Dataset schema:
- modelId: string, length 4-112
- sha: string, length 40
- lastModified: string, length 24
- tags: list
- pipeline_tag: string, 29 classes
- private: bool, 1 class
- author: string, length 2-38
- config: null
- id: string, length 4-112
- downloads: float64, 0-36.8M
- likes: float64, 0-712
- library_name: string, 17 classes
- __index_level_0__: int64, 0-38.5k
- readme: string, length 0-186k
andrewzolensky/bert-emotion
3b1a7f9a848ba99d439563d215d21a59b1ec0ee4
2022-05-26T01:51:23.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
andrewzolensky
null
andrewzolensky/bert-emotion
6
null
transformers
15,700
Entry not found
SamuelMiller/lil_sum_sum
66bd3eef4daa6a0afef9a67d3178fcd273c011f3
2022-05-23T05:04:39.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
SamuelMiller
null
SamuelMiller/lil_sum_sum
6
null
transformers
15,701
Entry not found
SamuelMiller/lil_sumsum
021ed1b27a589b9d81c513932f69f3119544e704
2022-05-23T19:49:44.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
SamuelMiller
null
SamuelMiller/lil_sumsum
6
null
transformers
15,702
## This is the model for the 'Sum_it' app ## Find it at HuggingFace Spaces! https://huggingface.co/spaces/SamuelMiller/sum_it
renjithks/layoutlmv3-er-ner
23b32a5762766b8571bf033af8420c3118c64c09
2022-05-31T17:36:05.000Z
[ "pytorch", "tensorboard", "layoutlmv3", "token-classification", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
renjithks
null
renjithks/layoutlmv3-er-ner
6
null
transformers
15,703
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-er-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-er-ner This model is a fine-tuned version of [renjithks/layoutlmv3-cord-ner](https://huggingface.co/renjithks/layoutlmv3-cord-ner) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2025 - Precision: 0.6442 - Recall: 0.6761 - F1: 0.6598 - Accuracy: 0.9507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 22 | 0.2940 | 0.4214 | 0.2956 | 0.3475 | 0.9147 | | No log | 2.0 | 44 | 0.2487 | 0.4134 | 0.4526 | 0.4321 | 0.9175 | | No log | 3.0 | 66 | 0.1922 | 0.5399 | 0.5460 | 0.5429 | 0.9392 | | No log | 4.0 | 88 | 0.1977 | 0.5653 | 0.5813 | 0.5732 | 0.9434 | | No log | 5.0 | 110 | 0.2018 | 0.6173 | 0.6252 | 0.6212 | 0.9477 | | No log | 6.0 | 132 | 0.1823 | 0.6232 | 0.6153 | 0.6192 | 0.9485 | | No log | 7.0 | 154 | 0.1972 | 0.6203 | 0.6238 | 0.6220 | 0.9477 | | No log | 8.0 | 176 | 0.1952 | 0.6292 | 0.6407 | 0.6349 | 0.9511 | | No log | 9.0 | 198 | 0.2070 | 0.6331 | 0.6492 | 0.6411 | 0.9489 | | No log | 10.0 | 220 | 0.2025 | 0.6442 | 0.6761 | 0.6598 | 0.9507 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
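A usage note for the card above: LayoutLMv3 token classification consumes a document image plus OCR tokens, not plain text. A minimal sketch, assuming the `renjithks/layoutlmv3-er-ner` repo ships processor files (otherwise load the processor from `microsoft/layoutlmv3-base`) and that `pytesseract` is installed for the built-in OCR; `receipt.png` is a placeholder input:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Hedged sketch: apply_ocr=True makes the processor run Tesseract to
# extract words and bounding boxes from the image itself.
processor = AutoProcessor.from_pretrained("renjithks/layoutlmv3-er-ner", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("renjithks/layoutlmv3-er-ner")

image = Image.open("receipt.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")
pred_ids = model(**encoding).logits.argmax(-1)[0]
print([model.config.id2label[int(i)] for i in pred_ids])
```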
Lucifer-nick/coconut_smiles
635069d5fe9c16a685cf8a2acb6ac626518efc34
2022-07-05T08:59:51.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Lucifer-nick
null
Lucifer-nick/coconut_smiles
6
null
transformers
15,704
--- license: apache-2.0 ---
luisu0124/Amazon_review
b59c73fb83ac0a246698d0ea5e370087af58642c
2022-05-26T03:28:01.000Z
[ "pytorch", "bert", "text-classification", "es", "transformers", "Text Classification" ]
text-classification
false
luisu0124
null
luisu0124/Amazon_review
6
null
transformers
15,705
--- language: - es tags: - Text Classification --- ## language: - es ## tags: - amazon_reviews_multi - Text Classification ### Dataset ![alt text](https://github.com/LuisU0124/IImage-NLP/blob/main/tokenmiz.png?raw=true) ### Example structure review: | review_id (string) | product_id (string) | reviewer_id (string) | stars (int) | review_body (string) | review_title (string) | language (string) | product_category (string) | | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | | de_0203609|product_de_0865382|reviewer_de_0267719|1|Armband ist leider nach 1 Jahr kaputt gegangen|Leider nach 1 Jahr kaputt|de|sports| ### Model ![alt text](https://github.com/LuisU0124/IImage-NLP/blob/main/Model.png?raw=true) ### Model train ![alt text](https://github.com/LuisU0124/IImage-NLP/blob/main/model%20train.png?raw=true) | Text | Classification | | ------------- | ------------- | | review_body | stars | ### Model test ![alt text](https://github.com/LuisU0124/IImage-NLP/blob/main/test%20model.png?raw=true) ### Classification of reviews in Spanish Uses `POS`, `NEG` labels.
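A minimal usage sketch for the card above (the example review and the interpretation of the `POS`/`NEG` labels are assumptions; the card itself shows no inference code):

```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint exposes a standard
# sequence-classification head with the POS/NEG labels described above.
clf = pipeline("text-classification", model="luisu0124/Amazon_review")
print(clf("El producto llegó roto y el vendedor nunca respondió."))
```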
Danastos/qacombined_bert_el_4
7a223a795d0abae49192226a9136993f2f748128
2022-05-24T22:01:23.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
Danastos
null
Danastos/qacombined_bert_el_4
6
null
transformers
15,706
Entry not found
OHenry/finetuned-neural-bert-ner
96299c07a4eb82934c96acfa5b462f72f8020fc0
2022-05-25T13:42:27.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
OHenry
null
OHenry/finetuned-neural-bert-ner
6
null
transformers
15,707
Entry not found
chanind/frame-semantic-transformer-large
348f581f4794e6d7c9be1e6e0a7a6076a77f6a37
2022-05-26T08:46:32.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
chanind
null
chanind/frame-semantic-transformer-large
6
null
transformers
15,708
Entry not found
ryan1998/distilbert-base-uncased-finetuned-emotion
2dcee06e0046c4586bc3a2f8493724f40f73b551
2022-05-26T14:32:56.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ryan1998
null
ryan1998/distilbert-base-uncased-finetuned-emotion
6
null
transformers
15,709
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5280 - Accuracy: 0.2886 - F1: 0.2742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 1316 | 2.6049 | 0.2682 | 0.2516 | | No log | 2.0 | 2632 | 2.5280 | 0.2886 | 0.2742 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
prodm93/GPT2Dynamic_title_model_v1
bce0035725479c756d6a1b128a82bd539ee1b587
2022-05-26T19:01:52.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
prodm93
null
prodm93/GPT2Dynamic_title_model_v1
6
null
transformers
15,710
Entry not found
jkhan447/language-detection-RoBerta-base-additional
9f5c70aace6ade1b2d151bcf6b7ef7ec41d58c06
2022-05-30T09:38:00.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
jkhan447
null
jkhan447/language-detection-RoBerta-base-additional
6
null
transformers
15,711
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: language-detection-RoBerta-base-additional results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # language-detection-RoBerta-base-additional This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1367 - Accuracy: 0.9874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
zenkri/autotrain-Arabic_Poetry_by_Subject-920730230
eb837305e1c60c42357dd69ee3c7ec2a9efc7360
2022-05-28T08:41:57.000Z
[ "pytorch", "bert", "text-classification", "ar", "dataset:zenkri/autotrain-data-Arabic_Poetry_by_Subject-1d8ba412", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
zenkri
null
zenkri/autotrain-Arabic_Poetry_by_Subject-920730230
6
null
transformers
15,712
--- tags: autotrain language: ar widget: - text: "I love AutoTrain 🤗" datasets: - zenkri/autotrain-data-Arabic_Poetry_by_Subject-1d8ba412 co2_eq_emissions: 0.07445219847409645 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 920730230 - CO2 Emissions (in grams): 0.07445219847409645 ## Validation Metrics - Loss: 0.5806193351745605 - Accuracy: 0.8785200718993409 - Macro F1: 0.8208042310550474 - Micro F1: 0.8785200718993409 - Weighted F1: 0.8783590365809876 - Macro Precision: 0.8486540338838363 - Micro Precision: 0.8785200718993409 - Weighted Precision: 0.8815185727115001 - Macro Recall: 0.8121110408113442 - Micro Recall: 0.8785200718993409 - Weighted Recall: 0.8785200718993409 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zenkri/autotrain-Arabic_Poetry_by_Subject-920730230 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("zenkri/autotrain-Arabic_Poetry_by_Subject-920730230", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("zenkri/autotrain-Arabic_Poetry_by_Subject-920730230", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
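The usage snippet in the card above stops at raw `outputs`; a hedged continuation that turns the logits into a predicted class (the label names are whatever AutoTrain stored in this model's config):

```python
import torch

# Continuation of the card's example: `outputs` and `model` come from
# the snippet above; id2label contents depend on the AutoTrain run.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```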
autoevaluate/translation
a7b2c3ce3e88c03c59c6abae0b14991e11ec4f8e
2022-05-28T14:31:28.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
autoevaluate
null
autoevaluate/translation
6
null
transformers
15,713
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wmt16 metrics: - bleu model-index: - name: translation results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: wmt16 type: wmt16 args: ro-en metrics: - name: Bleu type: bleu value: 28.5866 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # translation This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.3170 - Bleu: 28.5866 - Gen Len: 33.9575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 0.8302 | 0.03 | 1000 | 1.3170 | 28.5866 | 33.9575 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
ashesicsis1/xlsr-english
4b751f41d013ae6483f5053c2869d616b4d690f4
2022-05-29T14:47:54.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:librispeech_asr", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
ashesicsis1
null
ashesicsis1/xlsr-english
6
null
transformers
15,714
--- license: apache-2.0 tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: xlsr-english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlsr-english This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.3098 - Wer: 0.1451 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2453 | 2.37 | 400 | 0.5789 | 0.4447 | | 0.3736 | 4.73 | 800 | 0.3737 | 0.2850 | | 0.1712 | 7.1 | 1200 | 0.3038 | 0.2136 | | 0.117 | 9.47 | 1600 | 0.3016 | 0.2072 | | 0.0897 | 11.83 | 2000 | 0.3158 | 0.1920 | | 0.074 | 14.2 | 2400 | 0.3137 | 0.1831 | | 0.0595 | 16.57 | 2800 | 0.2967 | 0.1745 | | 0.0493 | 18.93 | 3200 | 0.3192 | 0.1670 | | 0.0413 | 21.3 | 3600 | 0.3176 | 0.1644 | | 0.0322 | 23.67 | 4000 | 0.3079 | 0.1598 | | 0.0296 | 26.04 | 4400 | 0.2978 | 0.1511 | | 0.0235 | 28.4 | 4800 | 0.3098 | 0.1451 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
Ayush414/distilbert-base-uncased-finetuned-ner
03280eebc648f4737e7f535fc6edb061cd517bff
2022-05-30T12:36:18.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
Ayush414
null
Ayush414/distilbert-base-uncased-finetuned-ner
6
null
transformers
15,715
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9253929599291565 - name: Recall type: recall value: 0.9352276541000112 - name: F1 type: f1 value: 0.9302843153619317 - name: Accuracy type: accuracy value: 0.9835258233116749 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0628 - Precision: 0.9254 - Recall: 0.9352 - F1: 0.9303 - Accuracy: 0.9835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2388 | 1.0 | 878 | 0.0723 | 0.9108 | 0.9186 | 0.9147 | 0.9798 | | 0.0526 | 2.0 | 1756 | 0.0633 | 0.9176 | 0.9290 | 0.9232 | 0.9817 | | 0.0303 | 3.0 | 2634 | 0.0628 | 0.9254 | 0.9352 | 0.9303 | 0.9835 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
dunlp/GWW
37b2823459ed104783827c9742ea2f58f4c659ef
2022-06-29T09:36:26.000Z
[ "pytorch", "tensorboard", "bert", "fill-mask", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
fill-mask
false
dunlp
null
dunlp/GWW
6
null
transformers
15,716
--- tags: - generated_from_trainer model-index: - name: GWW results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GWW This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on a Dutch civil works dataset. It achieves the following results on the evaluation set: - Loss: 2.7097 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7179 | 1.0 | 78 | 3.1185 | | 3.1134 | 2.0 | 156 | 2.8528 | | 2.9327 | 3.0 | 234 | 2.7249 | | 2.8377 | 4.0 | 312 | 2.7255 | | 2.7888 | 5.0 | 390 | 2.6737 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
erfangc/test1
20e7cfec04730aff44c89a7bb89a49fa01715e60
2022-05-30T18:23:14.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
erfangc
null
erfangc/test1
6
null
transformers
15,717
Entry not found
hhhhzy/roberta-pubhealth
6f7133efa090591da36a4d75963c6d7d31b0e4a9
2022-05-30T23:01:52.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
hhhhzy
null
hhhhzy/roberta-pubhealth
6
null
transformers
15,718
# Roberta-Pubhealth model This model is a fine-tuned version of [RoBERTa Base](https://huggingface.co/roberta-base) on the health_fact dataset. It achieves the following results on the evaluation set: - micro f1 (accuracy): 0.7137 - macro f1: 0.6056 - weighted f1: 0.7106 - samples predicted per second: 9.31 ## Dataset description [PUBHEALTH](https://huggingface.co/datasets/health_fact) is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore, each instance in the dataset has an explanation text field. The explanation is a justification for why the claim has been assigned a particular veracity label. ## Training hyperparameters The model is trained with the following tuned config: - model: roberta base - batch size: 32 - learning rate: 5e-5 - number of epochs: 4 - warmup steps: 0
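The card above gives no inference snippet; a minimal sketch, assuming the checkpoint's config carries the four `health_fact` veracity labels (true, false, unproven, mixture):

```python
from transformers import pipeline

# Hedged sketch: the returned label string comes from the model's own
# id2label mapping, which we assume matches the health_fact classes.
clf = pipeline("text-classification", model="hhhhzy/roberta-pubhealth")
print(clf("Vitamin C megadoses cure the common cold."))
```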
Jiexing/cosql_add_coref_t5_3b_order_0519_ckpt-576
e2289714e6fbffbe1b55111a1a45f6c7ca2f61b3
2022-05-31T02:22:25.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Jiexing
null
Jiexing/cosql_add_coref_t5_3b_order_0519_ckpt-576
6
null
transformers
15,719
Entry not found
mccaffary/finetuning-sentiment-model-3000-samples-DM
644eb1d0baa3d4758427c85666e74b56636c4df6
2022-06-01T09:01:21.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
mccaffary
null
mccaffary/finetuning-sentiment-model-3000-samples-DM
6
null
transformers
15,720
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples-DM results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8734177215189873 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples-DM This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3248 - Accuracy: 0.8667 - F1: 0.8734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.8.0 - Datasets 2.2.2 - Tokenizers 0.12.1
Classroom-workshop/assignment1-jack
890c29ec8d66df7ec2eee93b88456526d0d9ea2f
2022-06-02T15:22:42.000Z
[ "pytorch", "tf", "speech_to_text", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "arxiv:2010.05171", "arxiv:1904.08779", "transformers", "speech", "audio", "hf-asr-leaderboard", "license:mit", "model-index" ]
automatic-speech-recognition
false
Classroom-workshop
null
Classroom-workshop/assignment1-jack
6
null
transformers
15,721
--- language: en datasets: - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: mit pipeline_tag: automatic-speech-recognition widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: s2t-small-librispeech-asr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 4.3 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 9.0 --- # S2T-SMALL-LIBRISPEECH-ASR `s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text). ## Model description S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard autoregressive cross-entropy loss and generates the transcripts autoregressively. ## Intended uses & limitations This model can be used for end-to-end speech recognition (ASR). See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints. ### How to use As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model. *Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.* *Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece), so be sure to install those packages before running the examples.* You can either install those as extra speech dependencies with `pip install "transformers[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. ```python import torch from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration from datasets import load_dataset model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr") ds = load_dataset( "patrickvonplaten/librispeech_asr_dummy", "clean", split="validation" ) input_features = processor( ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt" ).input_features # Batch size 1 generated_ids = model.generate(input_ids=input_features) transcription = processor.batch_decode(generated_ids) ``` #### Evaluation on LibriSpeech Test The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.
```python from datasets import load_dataset, load_metric from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset wer = load_metric("wer") model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True) def map_to_pred(batch): features = processor(batch["audio"]["array"], sampling_rate=16000, padding=True, return_tensors="pt") input_features = features.input_features.to("cuda") attention_mask = features.attention_mask.to("cuda") gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask) batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True) return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"]) print("WER:", wer.compute(predictions=result["transcription"], references=result["text"])) ``` *Result (WER)*: | "clean" | "other" | |:-------:|:-------:| | 4.3 | 9.0 | ## Training data The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of approximately 1000 hours of 16kHz read English speech. ## Training procedure ### Preprocessing The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization) is applied to each example. The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000. ### Training The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779). The encoder receives speech features, and the decoder generates the transcripts autoregressively. ### BibTeX entry and citation info ```bibtex @inproceedings{wang2020fairseqs2t, title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, year = {2020}, } ```
bradgrimm/patent-cpc-predictor
4ebf5d51044c5ced9d964956ae2a5de40b669d42
2022-06-02T22:33:47.000Z
[ "pytorch", "deberta-v2", "feature-extraction", "en", "transformers", "patent", "deberta", "license:mit" ]
feature-extraction
false
bradgrimm
null
bradgrimm/patent-cpc-predictor
6
null
transformers
15,722
--- language: en tags: - patent - deberta license: mit --- # Patent CPC Predictor This is a fine-tuned version of microsoft/deberta-v3-small for predicting Patent CPC codes. # Dataset Dataset consists of titles and abstracts sampled from granted patent applications: https://www.kaggle.com/datasets/grimmace/sampled-patent-titles # Results | Category | Accuracy | | --- | ----------- | | Section | 92% | | Class | 88% | | Subclass | 85% |
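The pipeline tag for this row is `feature-extraction`, so the checkpoint returns embeddings rather than CPC labels directly; a hedged sketch of embedding a patent title (mapping embeddings to CPC codes would need a separate classifier):

```python
from transformers import pipeline

# Hedged sketch: feature-extraction returns per-token hidden states,
# shaped [batch][tokens][hidden_size].
extractor = pipeline("feature-extraction", model="bradgrimm/patent-cpc-predictor")
features = extractor("A method for wireless charging of electric vehicles")
print(len(features[0]), len(features[0][0]))  # tokens, hidden_size
```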
ArthurZ/opt-1.3b
340576fcb4f2edbd6ea82a907fe85a50bb913965
2022-06-21T14:34:58.000Z
[ "pytorch", "tf", "jax", "opt", "text-generation", "transformers", "generated_from_keras_callback", "model-index" ]
text-generation
false
ArthurZ
null
ArthurZ/opt-1.3b
6
null
transformers
15,723
--- tags: - generated_from_keras_callback model-index: - name: opt-1.3b results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # opt-1.3b This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - TensorFlow 2.9.1 - Datasets 2.2.2 - Tokenizers 0.12.1
madatnlp/torch-trinity
16d66ff35d40ec7eefdc4758682812a9e0734379
2022-06-03T06:59:20.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
madatnlp
null
madatnlp/torch-trinity
6
null
transformers
15,724
Entry not found
Eulaliefy/distilbert-base-uncased-finetuned-ner
c2b6afb7e9771459d736b71a3db335d165869c9f
2022-06-03T18:21:14.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
Eulaliefy
null
Eulaliefy/distilbert-base-uncased-finetuned-ner
6
null
transformers
15,725
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9250691754288877 - name: Recall type: recall value: 0.9350039154267815 - name: F1 type: f1 value: 0.9300100144653389 - name: Accuracy type: accuracy value: 0.9836052552147044 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0620 - Precision: 0.9251 - Recall: 0.9350 - F1: 0.9300 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2356 | 1.0 | 878 | 0.0699 | 0.9110 | 0.9225 | 0.9167 | 0.9801 | | 0.0509 | 2.0 | 1756 | 0.0621 | 0.9180 | 0.9314 | 0.9246 | 0.9823 | | 0.0303 | 3.0 | 2634 | 0.0620 | 0.9251 | 0.9350 | 0.9300 | 0.9836 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
ricardo-filho/bert_base_tcm_0.6
e6defd66f05a64373ad6c74b7d1eec37637dada1
2022-06-09T14:15:12.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
ricardo-filho
null
ricardo-filho/bert_base_tcm_0.6
6
null
transformers
15,726
--- license: mit tags: - generated_from_trainer model-index: - name: bert_base_tcm_0.6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_base_tcm_0.6 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0193 - Criterio Julgamento Precision: 0.8875 - Criterio Julgamento Recall: 0.8659 - Criterio Julgamento F1: 0.8765 - Criterio Julgamento Number: 82 - Data Sessao Precision: 0.7571 - Data Sessao Recall: 0.9636 - Data Sessao F1: 0.848 - Data Sessao Number: 55 - Modalidade Licitacao Precision: 0.9394 - Modalidade Licitacao Recall: 0.9718 - Modalidade Licitacao F1: 0.9553 - Modalidade Licitacao Number: 319 - Numero Exercicio Precision: 0.9172 - Numero Exercicio Recall: 0.9688 - Numero Exercicio F1: 0.9422 - Numero Exercicio Number: 160 - Objeto Licitacao Precision: 0.4659 - Objeto Licitacao Recall: 0.7069 - Objeto Licitacao F1: 0.5616 - Objeto Licitacao Number: 58 - Valor Objeto Precision: 0.8333 - Valor Objeto Recall: 0.9211 - Valor Objeto F1: 0.875 - Valor Objeto Number: 38 - Overall Precision: 0.8537 - Overall Recall: 0.9340 - Overall F1: 0.8920 - Overall Accuracy: 0.9951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0252 | 1.0 | 1963 | 0.0202 | 0.8022 | 0.8902 | 0.8439 | 82 | 0.7391 | 0.9273 | 0.8226 | 55 | 0.9233 | 0.9812 | 0.9514 | 319 | 0.8966 | 0.975 | 0.9341 | 160 | 0.4730 | 0.6034 | 0.5303 | 58 | 0.7083 | 0.8947 | 0.7907 | 38 | 0.8327 | 0.9298 | 0.8786 | 0.9948 | | 0.0191 | 2.0 | 3926 | 0.0226 | 0.8554 | 0.8659 | 0.8606 | 82 | 0.5641 | 0.4 | 0.4681 | 55 | 0.9572 | 0.9812 | 0.9690 | 319 | 0.9273 | 0.9563 | 0.9415 | 160 | 0.3770 | 0.3966 | 0.3866 | 58 | 0.8571 | 0.7895 | 0.8219 | 38 | 0.8620 | 0.8596 | 0.8608 | 0.9951 | | 0.0137 | 3.0 | 5889 | 0.0193 | 0.8875 | 0.8659 | 0.8765 | 82 | 0.7571 | 0.9636 | 0.848 | 55 | 0.9394 | 0.9718 | 0.9553 | 319 | 0.9172 | 0.9688 | 0.9422 | 160 | 0.4659 | 0.7069 | 0.5616 | 58 | 0.8333 | 0.9211 | 0.875 | 38 | 0.8537 | 0.9340 | 0.8920 | 0.9951 | | 0.0082 | 4.0 | 7852 | 0.0210 | 0.8780 | 0.8780 | 0.8780 | 82 | 0.7966 | 0.8545 | 0.8246 | 55 | 0.9512 | 0.9781 | 0.9645 | 319 | 0.9023 | 0.9812 | 0.9401 | 160 | 0.5385 | 0.6034 | 0.5691 | 58 | 0.9 | 0.9474 | 0.9231 | 38 | 0.8810 | 0.9256 | 0.9027 | 0.9963 | | 0.0048 | 5.0 | 9815 | 0.0222 | 0.8261 | 0.9268 | 0.8736 | 82 | 0.7969 | 0.9273 | 0.8571 | 55 | 0.9512 | 0.9781 | 0.9645 | 319 | 0.9231 | 0.975 | 0.9483 | 160 | 0.6515 | 0.7414 | 0.6935 | 58 | 0.875 | 0.9211 | 0.8974 | 38 | 0.8867 | 0.9452 | 0.9150 | 0.9964 | | 0.0044 | 6.0 | 11778 | 0.0262 | 0.8276 | 0.8780 | 0.8521 | 82 | 0.7681 | 0.9636 | 0.8548 | 55 | 0.9541 | 0.9781 | 0.9659 | 319 | 0.9235 | 0.9812 | 0.9515 | 160 | 0.5263 | 0.6897 | 0.5970 | 58 | 0.9211 | 0.9211 | 0.9211 | 38 | 0.8722 | 0.9396 | 0.9047 | 0.9959 | | 0.0042 | 7.0 | 13741 | 0.0246 | 0.8523 | 0.9146 | 0.8824 | 82 | 0.7656 | 0.8909 | 0.8235 | 55 | 0.9509 | 0.9718 | 0.9612 | 319 | 0.9118 | 0.9688 | 0.9394 | 160 | 0.5938 | 0.6552 | 0.6230 | 58 | 0.8974 | 0.9211 | 0.9091 | 38 | 0.8815 | 0.9298 | 0.9050 | 0.9960 | | 0.0013 | 8.0 | 15704 | 0.0294 | 0.8295 | 0.8902 | 0.8588 | 82 | 0.7391 | 0.9273 | 0.8226 | 55 | 0.9543 | 0.9812 | 0.9675 | 319 | 0.9070 | 0.975 | 0.9398 | 160 | 0.6094 | 0.6724 | 0.6393 | 58 | 0.875 | 0.9211 | 0.8974 | 38 | 0.8765 | 0.9368 | 0.9056 | 0.9961 | | 0.0019 | 9.0 | 17667 | 0.0303 | 0.8690 | 0.8902 | 0.8795 | 82 | 0.8305 | 0.8909 | 0.8596 | 55 | 0.9538 | 0.9718 | 0.9627 | 319 | 0.9290 | 0.9812 | 0.9544 | 160 | 0.6441 | 0.6552 | 0.6496 | 58 | 0.9211 | 0.9211 | 0.9211 | 38 | 0.9019 | 0.9298 | 0.9156 | 0.9961 | | 0.0007 | 10.0 | 19630 | 0.0295 | 0.8488 | 0.8902 | 0.8690 | 82 | 0.7903 | 0.8909 | 0.8376 | 55 | 0.9571 | 0.9781 | 0.9674 | 319 | 0.9181 | 0.9812 | 0.9486 | 160 | 0.6393 | 0.6724 | 0.6555 | 58 | 0.9211 | 0.9211 | 0.9211 | 38 | 0.8938 | 0.9340 | 0.9135 | 0.9962 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
haritzpuerto/distilbert-squad
0dfcf0cb9cc78471945ed00aed6e20df4b6afe4b
2022-06-03T20:08:44.000Z
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
haritzpuerto
null
haritzpuerto/distilbert-squad
6
null
transformers
15,727
TrainOutput(global_step=5475, training_loss=1.7323438837756848, metrics={'train_runtime': 4630.6634, 'train_samples_per_second': 18.917, 'train_steps_per_second': 1.182, 'total_flos': 1.1445080909703168e+16, 'train_loss': 1.7323438837756848, 'epoch': 1.0})
ssantanag/pasajes_de_la_biblia
49688f7cbfd2bd46188fd22bd7ed8467bd0bd135
2022-06-04T04:32:36.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "transformers", "generated_from_keras_callback", "model-index", "autotrain_compatible" ]
text2text-generation
false
ssantanag
null
ssantanag/pasajes_de_la_biblia
6
null
transformers
15,728
--- tags: - generated_from_keras_callback model-index: - name: pasajes_de_la_biblia results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pasajes_de_la_biblia This model was trained on the dataset of Bible verses published on Kaggle; the dataset can be found at the following link: https://www.kaggle.com/datasets/camesruiz/biblia-ntv-spanish-bible-ntv. ## Training and evaluation data The data was split as follows: - Training set: 58.20% - Validation set: 9.65% - Test set: 32.15% ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
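A minimal generation sketch for the card above (the card does not document the expected input format, so passing raw verse text is an assumption):

```python
from transformers import pipeline

# Hedged sketch: input formatting is a guess; the card specifies no prompt.
gen = pipeline("text2text-generation", model="ssantanag/pasajes_de_la_biblia")
print(gen("En el principio creó Dios los cielos y la tierra.", max_length=64))
```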
Jeevesh8/lecun_feather_berts-63
b8dd5a2863e750ab6221e7021e66d910b246b2de
2022-06-04T06:53:03.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/lecun_feather_berts-63
6
null
transformers
15,729
Entry not found
Jeevesh8/lecun_feather_berts-12
977691e4e4c5705db6de9a0861141b5bd87736ac
2022-06-04T06:52:11.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/lecun_feather_berts-12
6
null
transformers
15,730
Entry not found
gciaffoni/wav2vec2-large-xls-r-300m-it-colab4
652ef1fda97e390d10e4c084db5ac42a2908c1aa
2022-07-20T15:40:53.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
gciaffoni
null
gciaffoni/wav2vec2-large-xls-r-300m-it-colab4
6
null
transformers
15,731
R4 checkpoint-16000
nitishkumargundapu793/autotrain-chat-bot-responses-949231426
b6c36ec3efd5e5fc252f3d9f8409ec0cf5f9ae5b
2022-06-05T03:16:21.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:nitishkumargundapu793/autotrain-data-chat-bot-responses", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
nitishkumargundapu793
null
nitishkumargundapu793/autotrain-chat-bot-responses-949231426
6
null
transformers
15,732
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - nitishkumargundapu793/autotrain-data-chat-bot-responses co2_eq_emissions: 0.01123534537751425 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 949231426 - CO2 Emissions (in grams): 0.01123534537751425 ## Validation Metrics - Loss: 0.26922607421875 - Accuracy: 1.0 - Macro F1: 1.0 - Micro F1: 1.0 - Weighted F1: 1.0 - Macro Precision: 1.0 - Micro Precision: 1.0 - Weighted Precision: 1.0 - Macro Recall: 1.0 - Micro Recall: 1.0 - Weighted Recall: 1.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/nitishkumargundapu793/autotrain-chat-bot-responses-949231426 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("nitishkumargundapu793/autotrain-chat-bot-responses-949231426", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("nitishkumargundapu793/autotrain-chat-bot-responses-949231426", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
diiogo/caju128k
fd2f78e48076bd7c1c14c688acd285f63fa5c115
2022-07-25T21:38:53.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
diiogo
null
diiogo/caju128k
6
null
transformers
15,733
Entry not found
nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum-mlsum___summary_text
2fbc2c0e50405cfa97bcf8f295c526cc285cbeee
2022-06-06T03:26:11.000Z
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "dataset:mlsum", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
nestoralvaro
null
nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum-mlsum___summary_text
6
null
transformers
15,734
--- tags: - generated_from_trainer datasets: - mlsum metrics: - rouge model-index: - name: mT5_multilingual_XLSum-finetuned-xsum-mlsum___summary_text results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: mlsum type: mlsum args: es metrics: - name: Rouge1 type: rouge value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5_multilingual_XLSum-finetuned-xsum-mlsum___summary_text This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the mlsum dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 66592 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
bondi/bert-semaphore-prediction-w2
f3c0d329fc131043b06137505711f56baf2ca66a
2022-06-06T02:34:15.000Z
[ "pytorch", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
bondi
null
bondi/bert-semaphore-prediction-w2
6
null
transformers
15,735
--- tags: - generated_from_trainer model-index: - name: bert-semaphore-prediction-w2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-semaphore-prediction-w2 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum-v3
da235f3be1584e4e70ba5569579676eb6cbfbc1c
2022-06-06T09:42:56.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
yogeshchandrasekharuni
null
yogeshchandrasekharuni/bart-paraphrase-finetuned-xsum-v3
6
null
transformers
15,736
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-paraphrase-finetuned-xsum-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-finetuned-xsum-v3 This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3377 - Rouge1: 99.9461 - Rouge2: 72.6619 - Rougel: 99.9461 - Rougelsum: 99.9461 - Gen Len: 9.0396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 139 | 0.3653 | 96.4972 | 70.8271 | 96.5252 | 96.5085 | 9.7158 | | No log | 2.0 | 278 | 0.6624 | 98.3228 | 72.2829 | 98.2598 | 98.2519 | 9.0612 | | No log | 3.0 | 417 | 0.2880 | 98.2415 | 72.36 | 98.249 | 98.2271 | 9.4496 | | 0.5019 | 4.0 | 556 | 0.4188 | 98.1123 | 70.8536 | 98.0746 | 98.0465 | 9.4065 | | 0.5019 | 5.0 | 695 | 0.3718 | 98.8882 | 72.6619 | 98.8997 | 98.8882 | 10.7842 | | 0.5019 | 6.0 | 834 | 0.4442 | 99.6076 | 72.6619 | 99.6076 | 99.598 | 9.0647 | | 0.5019 | 7.0 | 973 | 0.2681 | 99.6076 | 72.6619 | 99.598 | 99.598 | 9.1403 | | 0.2751 | 8.0 | 1112 | 0.3577 | 99.2479 | 72.6619 | 99.2536 | 99.2383 | 9.0612 | | 0.2751 | 9.0 | 1251 | 0.2481 | 98.8785 | 72.6394 | 98.8882 | 98.8882 | 9.7914 | | 0.2751 | 10.0 | 1390 | 0.2339 | 99.6076 | 72.6619 | 99.6076 | 99.6076 | 9.1942 | | 0.2051 | 11.0 | 1529 | 0.2472 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.2338 | | 0.2051 | 12.0 | 1668 | 0.3948 | 99.6076 | 72.6619 | 99.598 | 99.598 | 9.0468 | | 0.2051 | 13.0 | 1807 | 0.4756 | 99.6076 | 72.6619 | 99.6076 | 99.6076 | 9.0576 | | 0.2051 | 14.0 | 1946 | 0.3543 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1544 | 15.0 | 2085 | 0.2828 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0576 | | 0.1544 | 16.0 | 2224 | 0.2456 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.1079 | | 0.1544 | 17.0 | 2363 | 0.2227 | 99.9461 | 72.6394 | 99.9461 | 99.9461 | 9.5072 | | 0.1285 | 18.0 | 2502 | 0.3490 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1285 | 19.0 | 2641 | 0.3736 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1285 | 20.0 | 2780 | 0.3377 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
kabelomalapane/Af-En
667202b7a92cbc4d0341fd0c31474cabef4a643a
2022-06-06T13:14:27.000Z
[ "pytorch", "marian", "text2text-generation", "transformers", "translation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
translation
false
kabelomalapane
null
kabelomalapane/Af-En
6
null
transformers
15,737
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: En-Af results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # En-Af This model is a fine-tuned version of [Helsinki-NLP/opus-mt-af-en](https://huggingface.co/Helsinki-NLP/opus-mt-af-en) on the None dataset. It achieves the following results on the evaluation set: Before training: - 'eval_bleu': 46.1522519 - 'eval_loss': 2.5693612 After training: - Loss: 1.7516168 - Bleu: 55.3924697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
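A hedged usage sketch for the card above (Marian checkpoints work with the generic `translation` pipeline; the Afrikaans example sentence is ours):

```python
from transformers import pipeline

# Af -> En with the fine-tuned Marian checkpoint from this row.
translator = pipeline("translation", model="kabelomalapane/Af-En")
print(translator("Die weer is vandag baie mooi."))
```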
mpsb00/ECHR_test_2
2cfbcfb35e82870d93454e768195f54840f12c1d
2022-06-06T11:17:21.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:lex_glue", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
mpsb00
null
mpsb00/ECHR_test_2
6
null
transformers
15,738
--- license: mit tags: - generated_from_trainer datasets: - lex_glue model-index: - name: ECHR_test_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ECHR_test_2 This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the lex_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2487 - Macro-f1: 0.4052 - Micro-f1: 0.5660 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.2056 | 0.44 | 500 | 0.2846 | 0.3335 | 0.4763 | | 0.1698 | 0.89 | 1000 | 0.2487 | 0.4052 | 0.5660 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
marieke93/BERT-evidence-types
f0f2696c4ca1c7405fe227bcb439fb1fd40b7aac
2022-06-11T13:32:10.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
marieke93
null
marieke93/BERT-evidence-types
6
null
transformers
15,739
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: BERT-evidence-types results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT-evidence-types This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the evidence types dataset. It achieves the following results on the evaluation set: - Loss: 2.8008 - Macro f1: 0.4227 - Weighted f1: 0.6976 - Accuracy: 0.7154 - Balanced accuracy: 0.3876 ## Training and evaluation data The dataset, as well as the code that was used to fine-tune this model, can be found in the GitHub repository [BA-Thesis-Information-Science-Persuasion-Strategies](https://github.com/mariekevdh/BA-Thesis-Information-Science-Persuasion-Strategies) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Macro f1 | Weighted f1 | Accuracy | Balanced accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:-----------------:| | 1.1148 | 1.0 | 125 | 1.0531 | 0.2566 | 0.6570 | 0.6705 | 0.2753 | | 0.7546 | 2.0 | 250 | 0.9725 | 0.3424 | 0.6947 | 0.7002 | 0.3334 | | 0.4757 | 3.0 | 375 | 1.1375 | 0.3727 | 0.7113 | 0.7184 | 0.3680 | | 0.2637 | 4.0 | 500 | 1.3585 | 0.3807 | 0.6836 | 0.6910 | 0.3805 | | 0.1408 | 5.0 | 625 | 1.6605 | 0.3785 | 0.6765 | 0.6872 | 0.3635 | | 0.0856 | 6.0 | 750 | 1.9703 | 0.3802 | 0.6890 | 0.7047 | 0.3704 | | 0.0502 | 7.0 | 875 | 2.1245 | 0.4067 | 0.6995 | 0.7169 | 0.3751 | | 0.0265 | 8.0 | 1000 | 2.2676 | 0.3756 | 0.6816 | 0.6925 | 0.3647 | | 0.0147 | 9.0 | 1125 | 2.4286 | 0.4052 | 0.6887 | 0.7062 | 0.3803 | | 0.0124 | 10.0 | 1250 | 2.5773 | 0.4084 | 0.6853 | 0.7040 | 0.3695 | | 0.0111 | 11.0 | 1375 | 2.5941 | 0.4146 | 0.6915 | 0.7085 | 0.3834 | | 0.0076 | 12.0 | 1500 | 2.6124 | 0.4157 | 0.6936 | 0.7078 | 0.3863 | | 0.0067 | 13.0 | 1625 | 2.7050 | 0.4139 | 0.6925 | 0.7108 | 0.3798 | | 0.0087 | 14.0 | 1750 | 2.6695 | 0.4252 | 0.7009 | 0.7169 | 0.3920 | | 0.0056 | 15.0 | 1875 | 2.7357 | 0.4257 | 0.6985 | 0.7161 | 0.3868 | | 0.0054 | 16.0 | 2000 | 2.7389 | 0.4249 | 0.6955 | 0.7116 | 0.3890 | | 0.0051 | 17.0 | 2125 | 2.7767 | 0.4197 | 0.6967 | 0.7146 | 0.3863 | | 0.004 | 18.0 | 2250 | 2.7947 | 0.4211 | 0.6977 | 0.7154 | 0.3876 | | 0.0041 | 19.0 | 2375 | 2.8030 | 0.4204 | 0.6953 | 0.7131 | 0.3855 | | 0.0042 | 20.0 | 2500 | 2.8008 | 0.4227 | 0.6976 | 0.7154 | 0.3876 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
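The card above reports metrics but no inference code; a minimal sketch (the evidence-type label names come from the model's config, which the card does not list):

```python
from transformers import pipeline

# Hedged sketch: label strings depend on the checkpoint's id2label mapping.
clf = pipeline("text-classification", model="marieke93/BERT-evidence-types")
print(clf("A 2019 randomized trial of 500 patients reported a 12% reduction in symptoms."))
```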
tzq0301/T5-Pegasus-news-title-generation
350d5d75eb8f8215e60e40a56ae408e68982d2b3
2022-06-09T06:56:58.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tzq0301
null
tzq0301/T5-Pegasus-news-title-generation
6
null
transformers
15,740
Entry not found
catofnull/BERT-Pretrain
5558b20bba5d6b2d67decada899ea47bf2b312d0
2022-06-08T16:30:49.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
catofnull
null
catofnull/BERT-Pretrain
6
null
transformers
15,741
Entry not found
blenderwang/roberta-base-emotion-32-balanced
0e11d80f17b2c7cba77f6789eab40881026bcde8
2022-06-09T08:34:08.000Z
[ "pytorch", "roberta", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
blenderwang
null
blenderwang/roberta-base-emotion-32-balanced
6
null
transformers
15,742
--- tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
russellc/bert-finetuned-ner-accelerate
f402fd7e21cc54b2838f57af196e9985fe39bb09
2022-06-09T11:22:12.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
russellc
null
russellc/bert-finetuned-ner-accelerate
6
null
transformers
15,743
Entry not found
qualitydatalab/autotrain-car-review-project-966432121
f28298b06ea8fe9f30ca78fa5a5c57ee7cb08368
2022-06-09T13:04:21.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:qualitydatalab/autotrain-data-car-review-project", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
qualitydatalab
null
qualitydatalab/autotrain-car-review-project-966432121
6
1
transformers
15,744
--- tags: autotrain language: en widget: - text: "I love driving this car" datasets: - qualitydatalab/autotrain-data-car-review-project co2_eq_emissions: 0.21529888368377176 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 966432121 - CO2 Emissions (in grams): 0.21529888368377176 ## Validation Metrics - Loss: 0.6013365983963013 - Accuracy: 0.737791286727457 - Macro F1: 0.729171012281939 - Micro F1: 0.737791286727457 - Weighted F1: 0.729171012281939 - Macro Precision: 0.7313770127538427 - Micro Precision: 0.737791286727457 - Weighted Precision: 0.7313770127538428 - Macro Recall: 0.737791286727457 - Micro Recall: 0.737791286727457 - Weighted Recall: 0.737791286727457 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love driving this car"}' https://api-inference.huggingface.co/models/qualitydatalab/autotrain-car-review-project-966432121 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("qualitydatalab/autotrain-car-review-project-966432121", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("qualitydatalab/autotrain-car-review-project-966432121", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
HrayrMSint/distilbert-base-uncased-finetuned-clinc
e8dac9ebfb82bc4a7e62eae78373d3d25509d05d
2022-06-10T01:17:59.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:clinc_oos", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
HrayrMSint
null
HrayrMSint/distilbert-base-uncased-finetuned-clinc
6
null
transformers
15,745
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9135483870967742 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7771 - Accuracy: 0.9135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2843 | 1.0 | 318 | 3.2793 | 0.7448 | | 2.6208 | 2.0 | 636 | 1.8750 | 0.8297 | | 1.5453 | 3.0 | 954 | 1.1565 | 0.8919 | | 1.0141 | 4.0 | 1272 | 0.8628 | 0.9090 | | 0.795 | 5.0 | 1590 | 0.7771 | 0.9135 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0 - Datasets 2.2.2 - Tokenizers 0.10.3
juliensimon/distilbert-amazon-shoe-reviews-quantized
e9fc155e18ac3294de8cab3090a00b6f7b07e307
2022-06-10T11:24:00.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
juliensimon
null
juliensimon/distilbert-amazon-shoe-reviews-quantized
6
null
transformers
15,746
Entry not found
Jeevesh8/std_pnt_04_feather_berts-24
f2765d909e423c944ef90ad16fae304a87215956
2022-06-12T06:02:57.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_pnt_04_feather_berts-24
6
null
transformers
15,747
Entry not found
Jeevesh8/std_pnt_04_feather_berts-47
69236732538b82e8d213bd3a47f44c7c83bb676a
2022-06-12T06:03:10.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_pnt_04_feather_berts-47
6
null
transformers
15,748
Entry not found
Jeevesh8/std_pnt_04_feather_berts-66
d742fe6904b663be0c8cb6fd0f5262296f1804d1
2022-06-12T06:03:01.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
Jeevesh8
null
Jeevesh8/std_pnt_04_feather_berts-66
6
null
transformers
15,749
Entry not found
Jingya/tmpkplizo4c
fd332ea15d2ce4daaf302c4b4fb72a42ca0929a3
2022-06-12T22:05:38.000Z
[ "pytorch", "bert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Jingya
null
Jingya/tmpkplizo4c
6
null
transformers
15,750
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue model-index: - name: tmpkplizo4c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tmpkplizo4c This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
ghadeermobasher/CRAFT-Original-BioBERT-384
ae4f506b96604622115c7963cb95b5dedc681b24
2022-06-13T17:26:35.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Original-BioBERT-384
6
null
transformers
15,751
Entry not found
ghadeermobasher/CRAFT-Original-BioBERT-512
305996f49c9e6dbcccfdaab5c2fb2739f2b546f6
2022-06-13T18:34:17.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Original-BioBERT-512
6
null
transformers
15,752
Entry not found
ghadeermobasher/CRAFT-Modified-BioBERT-512
75676aa0d99b3596376913a7d129db7f8ef34ae1
2022-06-13T20:39:14.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Modified-BioBERT-512
6
null
transformers
15,753
Entry not found
ghadeermobasher/CRAFT-Modified-BioBERT-384
46832fb8bce26d9f009c216ab0e98e3d3e2b3956
2022-06-13T19:31:54.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Modified-BioBERT-384
6
null
transformers
15,754
Entry not found
ghadeermobasher/CRAFT-Original-PubMedBERT-384
44a466d97945e45590cd03da6719eb5642eef67a
2022-06-13T22:54:01.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Original-PubMedBERT-384
6
null
transformers
15,755
Entry not found
ghadeermobasher/CRAFT-Original-BlueBERT-384
90b71dd283b9325655bdf8d54f700d7e0e6fbbd9
2022-06-13T22:55:00.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Original-BlueBERT-384
6
null
transformers
15,756
Entry not found
ghadeermobasher/CRAFT-Original-SciBERT-512
397f81f2b4f35b494c00dfc9df49ba3afcd75c67
2022-06-14T00:10:32.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Original-SciBERT-512
6
null
transformers
15,757
Entry not found
ghadeermobasher/CRAFT-Modified-PubMedBERT-384
0a5f16233a04fa588bbc0913406d380cd098b02f
2022-06-13T23:04:15.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Modified-PubMedBERT-384
6
null
transformers
15,758
Entry not found
ghadeermobasher/CRAFT-Modified-PubMedBERT-512
0d115bc62259e3d4e3f7639405d51ae023ba7edd
2022-06-14T00:12:42.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Modified-PubMedBERT-512
6
null
transformers
15,759
Entry not found
ghadeermobasher/CRAFT-Modified-SciBERT-384
832fdb32ed960decc5fb2a235e2fbce9ddeb6c6d
2022-06-13T23:11:55.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/CRAFT-Modified-SciBERT-384
6
null
transformers
15,760
Entry not found
ghadeermobasher/BioNLP13-Modified-SciBERT-512
cc1047c31bf271a791901203aca3a60cb997e4e2
2022-06-13T22:05:33.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioNLP13-Modified-SciBERT-512
6
null
transformers
15,761
Entry not found
ghadeermobasher/BioNLP13-Modified-BioBERT-512
8ae13a3ba92ea4433775a641e28cbfde510b34c5
2022-06-13T22:13:29.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BioNLP13-Modified-BioBERT-512
6
null
transformers
15,762
Entry not found
ghadeermobasher/BIONLP13CG-CHEM-Chem-Original-SciBERT-512
dfa0329ddf160b59978a7b940030941989d876a3
2022-06-13T23:17:37.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BIONLP13CG-CHEM-Chem-Original-SciBERT-512
6
null
transformers
15,763
Entry not found
ghadeermobasher/BIONLP13CG-CHEM-Chem-Original-BlueBERT-384
7c881cdee33a9ea1ac9570389d1d98d5f5cf6197
2022-06-14T00:04:04.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
ghadeermobasher
null
ghadeermobasher/BIONLP13CG-CHEM-Chem-Original-BlueBERT-384
6
null
transformers
15,764
Entry not found
ahmeddbahaa/mt5-base-finetuned-fa
38e05253db3bf24c1f6df812f0735d11ab86c35f
2022-06-14T17:07:35.000Z
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "dataset:pn_summary", "transformers", "summarization", "fa", "Abstractive Summarization", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
ahmeddbahaa
null
ahmeddbahaa/mt5-base-finetuned-fa
6
null
transformers
15,765
--- license: apache-2.0 tags: - summarization - fa - mt5 - Abstractive Summarization - generated_from_trainer datasets: - pn_summary model-index: - name: mt5-base-finetuned-fa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-fa This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the pn_summary dataset. It achieves the following results on the evaluation set: - Loss: 2.6477 - Rouge-1: 33.7 - Rouge-2: 21.28 - Rouge-l: 31.69 - Gen Len: 19.0 - Bertscore: 74.52 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 5 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 3.3828 | 1.0 | 1875 | 2.8114 | 32.17 | 19.47 | 30.12 | 18.99 | 74.25 | | 2.8204 | 2.0 | 3750 | 2.7080 | 32.67 | 19.92 | 30.56 | 19.0 | 74.31 | | 2.6907 | 3.0 | 5625 | 2.6724 | 33.22 | 20.44 | 31.11 | 19.0 | 74.47 | | 2.6029 | 4.0 | 7500 | 2.6513 | 33.46 | 20.75 | 31.38 | 19.0 | 74.54 | | 2.5414 | 5.0 | 9375 | 2.6477 | 33.68 | 20.91 | 31.62 | 19.0 | 74.58 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
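## Usage

A minimal summarization sketch; the generation settings (beam size, max length) are assumptions chosen to match the short summaries (Gen Len ≈ 19) reported above, not values from the authors.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-base-finetuned-fa")

article = "..."  # paste a Persian news article here
summary = summarizer(article, max_length=64, num_beams=4)
print(summary[0]["summary_text"])
```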
imosnoi/it_sn
611624a4c96bb7e558ea419f128135c6c0d96180
2022-06-14T08:32:27.000Z
[ "pytorch", "layoutlm", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
imosnoi
null
imosnoi/it_sn
6
null
transformers
15,766
Entry not found
erickfm/denim-sweep-2
93948e60fdd893a4e3a7695ac1eb5f1595b92f58
2022-06-15T03:58:37.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
erickfm
null
erickfm/denim-sweep-2
6
null
transformers
15,767
Entry not found
roscazo/gpt2-covid
3d02e3a0c29f605f7e21b4c058d44913851e9ffe
2022-06-15T09:46:02.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
roscazo
null
roscazo/gpt2-covid
6
null
transformers
15,768
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: gpt2-covid results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-covid This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-base-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-base-bne) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.0 - Tokenizers 0.12.1
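## Usage

A minimal generation sketch; the Spanish prompt and the sampling settings are assumptions, not part of the training recipe above.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="roscazo/gpt2-covid")

prompt = "Los síntomas más comunes de la COVID-19 son"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```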
Alireza1044/mobilebert_stsb
f2ca96d08df152af065bf2d2ef998bd0566149cf
2022-06-15T15:37:52.000Z
[ "pytorch", "tensorboard", "mobilebert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Alireza1044
null
Alireza1044/mobilebert_stsb
6
null
transformers
15,769
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8735136732190296 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stsb This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.5348 - Pearson: 0.8773 - Spearmanr: 0.8735 - Combined Score: 0.8754 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
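## Usage

A minimal inference sketch for the pair-regression setup: STS-B models emit a single similarity score per sentence pair on the benchmark's 0-5 scale. The example pair is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Alireza1044/mobilebert_stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two sentences as one pair input.
inputs = tokenizer("A man is playing a guitar.",
                   "Someone is strumming a guitar.",
                   return_tensors="pt")

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output

print(f"similarity ~ {score:.2f} (on the 0-5 STS-B scale)")
```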
Willy/bert-base-spanish-wwm-cased-finetuned-emotion
53907f55f935030188b0ac7a77ef5ab99466aebd
2022-06-15T23:22:27.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
Willy
null
Willy/bert-base-spanish-wwm-cased-finetuned-emotion
6
null
transformers
15,770
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-spanish-wwm-cased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-spanish-wwm-cased-finetuned-emotion This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5558 - Accuracy: 0.7630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5414 | 1.0 | 67 | 0.5677 | 0.7481 | | 0.5482 | 2.0 | 134 | 0.5558 | 0.7630 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
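## Usage

A minimal inference sketch; the emotion label names come from the model config (they are not documented in this card), and the Spanish example sentence is illustrative only.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Willy/bert-base-spanish-wwm-cased-finetuned-emotion",
)
print(classifier("Estoy muy contento con los resultados."))
```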
microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft
abc4c0b9eda81b52cefde50971da07e3c94f5dcf
2022-07-09T06:08:46.000Z
[ "pytorch", "swinv2", "transformers" ]
null
false
microsoft
null
microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft
6
null
transformers
15,771
Entry not found
erickfm/major-sweep-2
55c0aedc4f2b913eeb0bc32dfdc351b7acb4f9f0
2022-06-16T21:00:36.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
erickfm
null
erickfm/major-sweep-2
6
null
transformers
15,772
Entry not found
fanxiao/ext-bart-chinese-cndbpedia
6a9e65e22de9dfa19a40ad64ed04ca624fe5a954
2022-06-17T03:18:22.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
fanxiao
null
fanxiao/ext-bart-chinese-cndbpedia
6
null
transformers
15,773
rebel-base-chinese-cndbpedia is a generation-based relation extraction model:

- a SOTA Chinese end-to-end relation extraction model, using BART as the backbone.
- trained with the method of REBEL: Relation Extraction By End-to-end Language generation (EMNLP Findings 2021).
- trained on distantly supervised data from cndbpedia, starting from the checkpoint of fnlp/bart-base-chinese.
- achieves SOTA results on many Chinese relation extraction datasets, such as lic2019, lic2020, and HacRED.
- easy to use, just like a normal generation task.
- the input is a sentence and the output is linearized triples, e.g. input: 姚明是一名NBA篮球运动员, output: [subj]姚明[obj]NBA[rel]公司[obj]篮球运动员[rel]职业 (more details can be found in the REBEL paper).

Using the model:

```python
from transformers import BertTokenizer, BartForConditionalGeneration

model_name = 'fnlp/bart-base-chinese'
tokenizer_kwargs = {
    "use_fast": True,
    "additional_special_tokens": ['<rel>', '<obj>', '<subj>'],
}  # if you cannot see the tokens in the model card, please open the readme file
tokenizer = BertTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
model = BartForConditionalGeneration.from_pretrained("fanxiao/rebel-base-chinese-cndbpedia")
```
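A possible generation step to continue the loading snippet above; the beam-search settings are assumptions, not values from the authors. Decoding with `skip_special_tokens=False` keeps the relation markers so the linearized triples stay visible.

```python
# Continues from the tokenizer/model loaded above.
text = "姚明是一名NBA篮球运动员"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Beam-search settings are assumptions.
output_ids = model.generate(input_ids, max_length=128, num_beams=3)

# Keep special tokens so the <subj>/<obj>/<rel> markers remain in the output.
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```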
mariolinml/bert-finetuned-ner_0
3c83917bd3a1521133408991613581b23a2743df
2022-06-17T13:45:51.000Z
[ "pytorch", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
mariolinml
null
mariolinml/bert-finetuned-ner_0
6
null
transformers
15,774
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner_0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner_0 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2298 - Precision: 0.5119 - Recall: 0.4222 - F1: 0.4627 - Accuracy: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 250 | 0.2364 | 0.4874 | 0.2996 | 0.3711 | 0.9186 | | 0.2444 | 2.0 | 500 | 0.2219 | 0.5112 | 0.3887 | 0.4416 | 0.9233 | | 0.2444 | 3.0 | 750 | 0.2298 | 0.5119 | 0.4222 | 0.4627 | 0.9246 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
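## Usage

A minimal inference sketch; the tag set comes from the model config (the training data is listed as unknown above), and the example sentence is illustrative only.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mariolinml/bert-finetuned-ner_0",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Barack Obama was born in Hawaii."))
```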
powerwarez/kindword-klue_bert-base
7f0a42617d4d11b644125601569898c590137380
2022-06-20T00:44:42.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
powerwarez
null
powerwarez/kindword-klue_bert-base
6
null
transformers
15,775
--- license: apache-2.0 --- This model is klue-bert-base fine-tuned on the Smilegate profanity (abusive language) dataset.
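A minimal inference sketch; the label names and their meanings come from the model config and are not documented in this card, and the Korean example sentence is illustrative only.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="powerwarez/kindword-klue_bert-base",
)
print(classifier("오늘 발표 정말 잘했어요!"))  # "You did a great job on today's presentation!"
```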
sasuke/distilbert-base-uncased-finetuned-squad
24fd759dfffd79de018727da5044159854666774
2022-06-20T03:46:26.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
sasuke
null
sasuke/distilbert-base-uncased-finetuned-squad
6
null
transformers
15,776
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2997 | 1.0 | 2767 | 1.1918 | | 1.0491 | 2.0 | 5534 | 1.1328 | | 0.8768 | 3.0 | 8301 | 1.1458 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
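## Usage

A minimal extractive-QA sketch; the question/context pair is illustrative only.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="sasuke/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What task was the model fine-tuned for?",
    context="This checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```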
asahi417/lmqg-mt5-small-squad
aa527a72fe59650499cbf57322953479cba1f168
2022-06-21T17:01:18.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
asahi417
null
asahi417/lmqg-mt5-small-squad
6
null
transformers
15,777
Entry not found
Renukswamy/minilm-uncased-squad2-finetuned-squad
ad9f23a96eb37a91c7a25d817ea19be980fa48ff
2022-06-18T16:29:13.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "transformers", "generated_from_trainer", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
Renukswamy
null
Renukswamy/minilm-uncased-squad2-finetuned-squad
6
null
transformers
15,778
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: minilm-uncased-squad2-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # minilm-uncased-squad2-finetuned-squad This model is a fine-tuned version of [deepset/minilm-uncased-squad2](https://huggingface.co/deepset/minilm-uncased-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.7163 | 1.0 | 6941 | 0.6917 | | 0.5752 | 2.0 | 13882 | 0.7030 | | 0.4957 | 3.0 | 20823 | 0.7239 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
dibsondivya/distilbert-phmtweets-sutd
7f8381afdeeb49c1cf21f8873407d0ef25374293
2022-06-19T11:40:42.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:custom-phm-tweets", "arxiv:1802.09130", "transformers", "health", "tweet", "model-index" ]
text-classification
false
dibsondivya
null
dibsondivya/distilbert-phmtweets-sutd
6
null
transformers
15,779
---
tags:
- distilbert
- health
- tweet
datasets:
- custom-phm-tweets
metrics:
- accuracy
model-index:
- name: distilbert-phmtweets-sutd
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: custom-phm-tweets
      type: labelled
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.877
---

# distilbert-phmtweets-sutd

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) for text classification, used to identify public health events in tweets. The project was based on the Emory University paper [Detection of Personal Health Mentions in Social Media](https://arxiv.org/pdf/1802.09130v2.pdf), which worked with this [custom dataset](https://github.com/emory-irlab/PHM2017). It achieves the following results on the evaluation set:

- Accuracy: 0.877

## Usage

```Python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
model = AutoModelForSequenceClassification.from_pretrained("dibsondivya/distilbert-phmtweets-sutd")
```

### Model Evaluation Results

With Validation Set
- Accuracy: 0.8708661417322835

With Test Set
- Accuracy: 0.8772961058045555

# Reference for distilbert-base-uncased Model

```bibtex
@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}
```
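### Inference sketch

A possible inference step to follow the loading code above; the argmax readout and the assumption that the positive class marks a personal health mention (as in PHM2017) are ours, not the author's documented mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dibsondivya/distilbert-phmtweets-sutd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Just got diagnosed with the flu, staying home today.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed readout: argmax over the two classes.
print(logits.argmax(dim=-1).item())
```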
Alireza1044/MobileBERT_Theseus-rte
9ccd83c743ac3817226fd8d36aa2140e3af54795
2022-06-19T12:12:52.000Z
[ "pytorch", "mobilebert", "text-classification", "transformers" ]
text-classification
false
Alireza1044
null
Alireza1044/MobileBERT_Theseus-rte
6
null
transformers
15,780
Entry not found
jhmin/finetuning-sentiment-model-3000-samples
429fb1a2dde6d4a4b763a4ee1d8f9cff74571388
2022-06-19T13:37:55.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
jhmin
null
jhmin/finetuning-sentiment-model-3000-samples
6
null
transformers
15,781
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8666666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3144 - Accuracy: 0.8667 - F1: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
EventMiner/xlm-roberta-large-en-doc
2012b3a0db9d48d46c878c6c6f536ef97febf0b6
2022-06-19T15:42:30.000Z
[ "pytorch", "xlm-roberta", "text-classification", "multilingual", "transformers", "news event detection", "document level", "EventMiner", "license:apache-2.0" ]
text-classification
false
EventMiner
null
EventMiner/xlm-roberta-large-en-doc
6
null
transformers
15,782
--- language: multilingual tags: - news event detection - document level - EventMiner license: apache-2.0 --- # EventMiner EventMiner is designed for multilingual news event detection. The goal of news event detection is the automatic extraction of event details from news articles. This event extraction can be done at different levels: document, sentence and word ranging from coarse-granular information to fine-granular information. We submitted the best results based on EventMiner to [CASE 2021 shared task 1: *Multilingual Protest News Detection*](https://competitions.codalab.org/competitions/31247). Our approach won first place in English for the document level task while ranking within the top four solutions for other languages: Portuguese, Spanish, and Hindi. *EventMiner/xlm-roberta-large-en-doc* is an xlm-roberta-large sequence classification model fine-tuned on English document level data of the multilingual version of GLOCON gold standard dataset released with [CASE 2021](https://aclanthology.org/2021.case-1.11/). <br> Labels: - Label_0: News article does not contain information about a past or ongoing socio-political event - Label_1: News article contains information about a past or ongoing socio-political event More details about the training procedure are available with our [codebase](https://github.com/HHansi/EventMiner). # How to Use ## Load Model ```python from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification model_name = 'EventMiner/xlm-roberta-large-en-doc' tokenizer = XLMRobertaTokenizer.from_pretrained(model_name) model = XLMRobertaForSequenceClassification.from_pretrained(model_name) ``` ## Classification ```python from transformers import pipeline classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) classifier("Police arrested five more student leaders on Monday when implementing the strike call given by MSU students union as a mark of protest against the decision to introduce payment seats in first-year commerce programme.") ``` # Citation If you use this model, please consider citing the following paper. ``` @inproceedings{hettiarachchi-etal-2021-daai, title = "{DAAI} at {CASE} 2021 Task 1: Transformer-based Multilingual Socio-political and Crisis Event Detection", author = "Hettiarachchi, Hansi and Adedoyin-Olowe, Mariam and Bhogal, Jagdev and Gaber, Mohamed Medhat", booktitle = "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.case-1.16", doi = "10.18653/v1/2021.case-1.16", pages = "120--130", } ```
thaidv96/lead-reliability-scoring
9df9db90b661788b4f07176423c806c741bb5206
2022-06-19T16:15:46.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
thaidv96
null
thaidv96/lead-reliability-scoring
6
null
transformers
15,783
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: lead-reliability-scoring results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lead-reliability-scoring This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0123 - F1: 0.9937 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 50 | 0.3866 | 0.5761 | | No log | 2.0 | 100 | 0.3352 | 0.6538 | | No log | 3.0 | 150 | 0.1786 | 0.8283 | | No log | 4.0 | 200 | 0.1862 | 0.8345 | | No log | 5.0 | 250 | 0.1367 | 0.8736 | | No log | 6.0 | 300 | 0.0642 | 0.9477 | | No log | 7.0 | 350 | 0.0343 | 0.9748 | | No log | 8.0 | 400 | 0.0190 | 0.9874 | | No log | 9.0 | 450 | 0.0123 | 0.9937 | | 0.2051 | 10.0 | 500 | 0.0058 | 0.9937 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
icelab/cosmicroberta
ff7eb9c95291d3c522fe84b8ba86aff392ebbeec
2022-06-20T09:14:41.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "license:mpl-2.0", "autotrain_compatible" ]
fill-mask
false
icelab
null
icelab/cosmicroberta
6
null
transformers
15,784
---
license: mpl-2.0
widget:
- text: "The closest planet to earth is <mask>."
- text: "Electrical power is stored on a spacecraft with <mask>."
---

### CosmicRoBERTa

This model is a further pre-trained version of RoBERTa for space science on a domain-specific corpus, which includes abstracts from the NTRS library, abstracts from SCOPUS, ECSS requirements, and other sources from this domain.

The model performs slightly better on a subset (60% of the total dataset) of the CR task presented in our paper [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078).

| | RoBERTa | CosmicRoBERTa | SpaceRoBERTa |
|-----------------------------------------------|----------------|---------------------|---------------------|
| Parameter | 0.475 | 0.515 | 0.485 |
| GN&C | 0.488 | 0.609 | 0.602 |
| System engineering | 0.523 | 0.559 | 0.555 |
| Propulsion | 0.403 | 0.521 | 0.465 |
| Project Scope | 0.493 | 0.541 | 0.497 |
| OBDH | 0.717 | 0.789 | 0.794 |
| Thermal | 0.432 | 0.509 | 0.491 |
| Quality control | 0.686 | 0.704 | 0.678 |
| Telecom. | 0.360 | 0.614 | 0.557 |
| Measurement | 0.833 | 0.849 | 0.858 |
| Structure & Mechanism | 0.489 | 0.581 | 0.566 |
| Space Environment | 0.543 | 0.681 | 0.605 |
| Cleanliness | 0.616 | 0.621 | 0.651 |
| Project Organisation / Documentation | 0.355 | 0.427 | 0.429 |
| Power | 0.638 | 0.735 | 0.661 |
| Safety / Risk (Control) | 0.647 | 0.727 | 0.676 |
| Materials / EEEs | 0.585 | 0.642 | 0.639 |
| Nonconformity | 0.365 | 0.333 | 0.419 |
| weighted | 0.584 | 0.652 (+7%) | 0.633 (+5%) |
| Valid. Loss | 0.605 | 0.505 | 0.542 |

### BibTeX entry and citation info

```
@ARTICLE{9548078,
  author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
  journal={IEEE Access},
  title={SpaceTransformers: Language Modeling for Space Systems},
  year={2021},
  volume={9},
  number={},
  pages={133111-133122},
  doi={10.1109/ACCESS.2021.3115659}
}
```
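### Usage

A minimal fill-mask sketch using one of the widget prompts above; as a RoBERTa model, the mask token is `<mask>`.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="icelab/cosmicroberta")

for pred in fill("Electrical power is stored on a spacecraft with <mask>."):
    print(f"{pred['token_str']!r}  (score={pred['score']:.3f})")
```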
Splend1dchan/wav2vec2-large-lv60_mt5-base_textdecoderonly_bs64
86d13ed630e3597c13d09aec599b34b2501b497c
2022-06-22T14:38:01.000Z
[ "pytorch", "speechmix", "transformers" ]
null
false
Splend1dchan
null
Splend1dchan/wav2vec2-large-lv60_mt5-base_textdecoderonly_bs64
6
null
transformers
15,785
Entry not found
anjankumar/Anjan-finetuned-iitbombay-en-to-hi
2edb31a923a8781bab5d9b9fb54188922856f62b
2022-06-21T11:20:50.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "translation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
translation
false
anjankumar
null
anjankumar/Anjan-finetuned-iitbombay-en-to-hi
6
1
transformers
15,786
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: Anjan-finetuned-iitbombay-en-to-hi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Anjan-finetuned-iitbombay-en-to-hi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7924 - Bleu: 6.3001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
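## Usage

A minimal translation sketch; since this is a Marian checkpoint, the generic translation pipeline applies. The example sentence is illustrative only.

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="anjankumar/Anjan-finetuned-iitbombay-en-to-hi",
)
print(translator("How are you today?")[0]["translation_text"])
```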
Adapting/dialog_sentiment_classifier
020f8307e053573abb67bfb4fa63ce6ec58b1c9a
2022-06-28T20:12:58.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
Adapting
null
Adapting/dialog_sentiment_classifier
6
null
transformers
15,787
Colab notebook used to train this model: https://colab.research.google.com/drive/1txlzTh9bdAHVSt229Nbip6dtkYvDbWFj?usp=sharing
Motahar/clickbait-csebert
78abd746e12d0884a02b9e8853a37998464ac0a7
2022-06-23T17:22:49.000Z
[ "pytorch", "ganbert", "transformers" ]
null
false
Motahar
null
Motahar/clickbait-csebert
6
null
transformers
15,788
Entry not found
BigSalmon/InformalToFormalLincoln52
c8cd12098579fcc20a07ff2796a8af6cc33178ce
2022-06-23T02:02:34.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/InformalToFormalLincoln52
6
null
transformers
15,789
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln52") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln52") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - nebraska - unicamerical legislature - different from federal house and senate text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. 
``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ``` ``` original: had trouble deciding. translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation. *** original: ``` ``` input: not loyal 1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ). *** input: ```
winson/bert-finetuned-ner-accelerate
b6e12a8876448210bda61e5c8728f58cd60badd0
2022-06-23T10:49:43.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
winson
null
winson/bert-finetuned-ner-accelerate
6
null
transformers
15,790
Nothing to note; this model was created by following a tutorial.
kidzy/distilbert-base-uncased-finetuned-emotion
627312aa2d7ed31f91815c052e08491b97296830
2022-06-26T08:19:59.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
kidzy
null
kidzy/distilbert-base-uncased-finetuned-emotion
6
1
transformers
15,791
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9246037761691881 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2240 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8521 | 1.0 | 250 | 0.3285 | 0.904 | 0.9017 | | 0.2546 | 2.0 | 500 | 0.2240 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
404E/autotrain-formality-1026434913
9422eb7ab20af3ee09786c7c0a4762976cb8d117
2022-06-23T15:19:21.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:404E/autotrain-data-formality", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
404E
null
404E/autotrain-formality-1026434913
6
null
transformers
15,792
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - 404E/autotrain-data-formality co2_eq_emissions: 7.300283563922049 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 1026434913 - CO2 Emissions (in grams): 7.300283563922049 ## Validation Metrics - Loss: 0.5467672348022461 - MSE: 0.5467672944068909 - MAE: 0.5851736068725586 - R2: 0.6883510493648173 - RMSE: 0.7394371628761292 - Explained Variance: 0.6885714530944824 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/404E/autotrain-formality-1026434913 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
enoriega/kw_pubmed_vanilla_document_10000_0.0003_2
31e7046e50ca93184cc7b51489a3b5c117bab5b4
2022-06-25T16:09:59.000Z
[ "pytorch", "tensorboard", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
enoriega
null
enoriega/kw_pubmed_vanilla_document_10000_0.0003_2
6
null
transformers
15,793
Entry not found
doraemon1998/distilgpt2-finetuned-wikitext2
64f4f7fe750349494f206338cf09b3827e40cd50
2022-06-24T09:08:17.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
doraemon1998
null
doraemon1998/distilgpt2-finetuned-wikitext2
6
null
transformers
15,794
Entry not found
pnichite/QAClassification
4f272ce6327fd0d3433147d152d989f685dc9d22
2022-07-07T07:04:11.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
pnichite
null
pnichite/QAClassification
6
null
transformers
15,795
Entry not found
VedantS01/bert-finetuned-custom
73af18ddf2c29a71e76a568f4745a67e3ad1650a
2022-07-01T15:35:59.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
VedantS01
null
VedantS01/bert-finetuned-custom
6
null
transformers
15,796
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-finetuned-custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-custom This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
sasuke/bert-base-uncased-finetuned-claqua_cqa_predicate
2f5f6654bf9826c72959d719b65135149c459604
2022-06-25T11:36:08.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers" ]
text-classification
false
sasuke
null
sasuke/bert-base-uncased-finetuned-claqua_cqa_predicate
6
null
transformers
15,797
Entry not found
erickfm/happy-sweep-1
b214a4bf268e20f3afeff777b537a90aaf6e2358
2022-06-25T17:57:39.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
erickfm
null
erickfm/happy-sweep-1
6
null
transformers
15,798
Entry not found
rpgz31/tiny-nfl
4a18ca784ccdb5ba3ae0cb98b689d5f37d8f323b
2022-06-25T18:59:14.000Z
[ "pytorch", "gpt2", "text-generation", "dataset:bittensor", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
rpgz31
null
rpgz31/tiny-nfl
6
null
transformers
15,799
--- license: apache-2.0 tags: - generated_from_trainer datasets: - bittensor metrics: - accuracy model-index: - name: tiny-nfl results: - task: name: Causal Language Modeling type: text-generation dataset: name: bittensor tiny.json type: bittensor args: tiny.json metrics: - name: Accuracy type: accuracy value: 0.15555555555555556 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-nfl This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the bittensor tiny.json dataset. It achieves the following results on the evaluation set: - Loss: 6.4602 - Accuracy: 0.1556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.3.3.dev0 - Tokenizers 0.12.1
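## Usage

A minimal generation sketch; the football prompt and the sampling settings are assumptions, not part of the training setup above.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="rpgz31/tiny-nfl")

outputs = generator("The Packers opened the drive with",
                    max_new_tokens=30, do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```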