Dataset columns (type and observed range):

| Column | Type | Range |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-15 12:29:39 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 521 values |
| tags | list | length 1–4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-15 12:28:52 |
| card | string | length 11–1.01M |
KarelDO/roberta-base.CEBaB_confounding.uniform.absa.5-class.seed_42
KarelDO
2022-10-14T03:31:53Z
32
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T03:27:21Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: roberta-base.CEBaB_confounding.uniform.absa.5-class.seed_42 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE-ABSA type: OpenTable args: opentable-absa metrics: - name: Accuracy type: accuracy value: 0.9024887800897593 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base.CEBaB_confounding.uniform.absa.5-class.seed_42 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.3315 - Accuracy: 0.9025 - Macro-f1: 0.9009 - Weighted-macro-f1: 0.9025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
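A quick way to try this checkpoint (and the sibling CEBaB fine-tunes below) is the `transformers` text-classification pipeline. A minimal sketch follows; how the aspect is encoded for the ABSA variants is an assumption, since the card does not document the expected input format.

```python
from transformers import pipeline

# Minimal inference sketch; prepending the aspect term ("service.") is an
# assumption, as the card does not document the expected input format.
classifier = pipeline(
    "text-classification",
    model="KarelDO/roberta-base.CEBaB_confounding.uniform.absa.5-class.seed_42",
)
print(classifier("service. The pasta was great but the waiter was rude."))
```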
KarelDO/bert-base-uncased.CEBaB_confounding.observational.sa.5-class.seed_43
KarelDO
2022-10-14T03:30:23Z
32
0
transformers
[ "transformers", "pytorch", "bert", "generated_from_trainer", "en", "dataset:OpenTable", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T03:28:07Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: bert-base-uncased.CEBaB_confounding.observational.sa.5-class.seed_43 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - name: Accuracy type: accuracy value: 0.6592946802151823 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased.CEBaB_confounding.observational.sa.5-class.seed_43 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.8422 - Accuracy: 0.6593 - Macro-f1: 0.6196 - Weighted-macro-f1: 0.6403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
KarelDO/bert-base-uncased.CEBaB_confounding.observational.sa.5-class.seed_42
KarelDO
2022-10-14T03:27:46Z
32
0
transformers
[ "transformers", "pytorch", "bert", "generated_from_trainer", "en", "dataset:OpenTable", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T03:25:26Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: bert-base-uncased.CEBaB_confounding.observational.sa.5-class.seed_42 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - name: Accuracy type: accuracy value: 0.6604901374775852 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased.CEBaB_confounding.observational.sa.5-class.seed_42 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.8234 - Accuracy: 0.6605 - Macro-f1: 0.6242 - Weighted-macro-f1: 0.6524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
KarelDO/roberta-base.CEBaB_confounding.observational.absa.5-class.seed_42
KarelDO
2022-10-14T03:16:45Z
31
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T03:12:08Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: roberta-base.CEBaB_confounding.observational.absa.5-class.seed_42 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE-ABSA type: OpenTable args: opentable-absa metrics: - name: Accuracy type: accuracy value: 0.8867809057527539 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base.CEBaB_confounding.observational.absa.5-class.seed_42 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.4927 - Accuracy: 0.8868 - Macro-f1: 0.8847 - Weighted-macro-f1: 0.8871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
KarelDO/roberta-base.CEBaB_confounding.food_service_positive.sa.5-class.seed_43
KarelDO
2022-10-14T02:34:53Z
32
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T02:32:19Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: roberta-base.CEBaB_confounding.food_service_positive.sa.5-class.seed_43 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - name: Accuracy type: accuracy value: 0.7112970711297071 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base.CEBaB_confounding.food_service_positive.sa.5-class.seed_43 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.7180 - Accuracy: 0.7113 - Macro-f1: 0.6981 - Weighted-macro-f1: 0.7073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
KarelDO/roberta-base.CEBaB_confounding.price_food_ambiance_negative.sa.5-class.seed_42
KarelDO
2022-10-14T02:23:32Z
33
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T02:21:10Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: roberta-base.CEBaB_confounding.price_food_ambiance_negative.sa.5-class.seed_42 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - name: Accuracy type: accuracy value: 0.7352062163777645 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base.CEBaB_confounding.price_food_ambiance_negative.sa.5-class.seed_42 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.6579 - Accuracy: 0.7352 - Macro-f1: 0.7190 - Weighted-macro-f1: 0.7313 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
KarelDO/roberta-base.CEBaB_confounding.uniform.sa.5-class.seed_43
KarelDO
2022-10-14T02:17:57Z
33
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T02:15:32Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: roberta-base.CEBaB_confounding.uniform.sa.5-class.seed_43 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - name: Accuracy type: accuracy value: 0.735803945008966 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base.CEBaB_confounding.uniform.sa.5-class.seed_43 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.6596 - Accuracy: 0.7358 - Macro-f1: 0.7204 - Weighted-macro-f1: 0.7325 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
KarelDO/roberta-base.CEBaB_confounding.uniform.sa.5-class.seed_42
KarelDO
2022-10-14T02:15:08Z
31
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T02:12:45Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: roberta-base.CEBaB_confounding.uniform.sa.5-class.seed_42 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - name: Accuracy type: accuracy value: 0.726240286909743 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base.CEBaB_confounding.uniform.sa.5-class.seed_42 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.6956 - Accuracy: 0.7262 - Macro-f1: 0.7053 - Weighted-macro-f1: 0.7201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
KarelDO/roberta-base.CEBaB_confounding.observational.sa.5-class.seed_44
KarelDO
2022-10-14T02:12:19Z
32
0
transformers
[ "transformers", "pytorch", "roberta", "generated_from_trainer", "en", "dataset:OpenTable", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-10-14T02:09:34Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - OpenTable metrics: - accuracy model-index: - name: roberta-base.CEBaB_confounding.observational.sa.5-class.seed_44 results: - task: name: Text Classification type: text-classification dataset: name: OpenTable OPENTABLE type: OpenTable args: opentable metrics: - name: Accuracy type: accuracy value: 0.7190675433353257 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base.CEBaB_confounding.observational.sa.5-class.seed_44 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.7230 - Accuracy: 0.7191 - Macro-f1: 0.7052 - Weighted-macro-f1: 0.7128 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 44 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
MingZhong/unieval-dialog
MingZhong
2022-10-14T01:09:17Z
6,164
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2210.07197", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-11T01:08:51Z
Pre-trained evaluator from the EMNLP 2022 paper *[Towards a Unified Multi-Dimensional Evaluator for Text Generation](https://arxiv.org/abs/2210.07197)* ## Introduction **Multi-dimensional evaluation** is the dominant paradigm for human evaluation in Natural Language Generation (NLG), i.e., evaluating the generated text along multiple explainable dimensions, such as coherence and fluency. However, automatic evaluation in NLG is still dominated by similarity-based metrics (e.g., ROUGE, BLEU), which are not sufficient to capture the differences between advanced generation models. We therefore propose **UniEval** to bridge this gap, enabling a more comprehensive and fine-grained evaluation of NLG systems. ## Pre-trained Evaluator **unieval-dialog** is the pre-trained evaluator for the dialogue response generation task. It can evaluate model output along five dimensions: - *naturalness* - *coherence* - *engagingness* - *groundedness* - *understandability* ## Usage Please refer to [our GitHub repository](https://github.com/maszhongming/UniEval).
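Since the usage section only points to the GitHub repository, here is a rough sketch of loading the checkpoint directly with `transformers`. The prompt template below is an approximation of the repo's scorer, which frames each dimension as a yes/no question; consult the repository for the exact format and scoring logic.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Rough sketch only: the exact prompt template and score computation live in
# https://github.com/maszhongming/UniEval; this just shows that the checkpoint
# is a standard T5 model that answers yes/no style evaluation questions.
tokenizer = AutoTokenizer.from_pretrained("MingZhong/unieval-dialog")
model = T5ForConditionalGeneration.from_pretrained("MingZhong/unieval-dialog")

prompt = (
    "question: Is this a natural response in the dialogue? </s> "
    "response: Sounds great, see you at seven. </s> "
    "dialogue history: Shall we have dinner together tonight?"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: "Yes" or "No"
```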
joey234/roberta-base-mnli-negnli
joey234
2022-10-14T00:41:46Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-14T00:31:54Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta2-base-mnli-negnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta2-base-mnli-negnli This model is a fine-tuned version of [sileod/roberta-base-mnli](https://huggingface.co/sileod/roberta-base-mnli) on the GLUE MNLI dataset and the [MNLI subset in NegNLI](https://github.com/mosharafhossain/negation-and-nli/tree/master/data/new_benchmarks/processed_for_run/MNLI). It achieves the following results on the evaluation set: - Loss: 0.8397 - Accuracy: 0.8400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.8.0 - Datasets 1.18.3 - Tokenizers 0.12.1
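Because this is an NLI model, inference takes a premise/hypothesis pair; a minimal sketch with the `transformers` pipeline is below. The returned label names depend on how the checkpoint's `id2label` mapping was saved.

```python
from transformers import pipeline

# Pair-classification sketch; label names (entailment/neutral/contradiction vs.
# LABEL_0..LABEL_2) depend on the checkpoint's id2label config.
nli = pipeline("text-classification", model="joey234/roberta-base-mnli-negnli")
print(nli({"text": "The chef did not add salt.", "text_pair": "The chef added salt."}))
```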
xrverse/distilbert-base-uncased-finetuned-clinc
xrverse
2022-10-14T00:18:12Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-13T13:15:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9164516129032259 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7792 - Accuracy: 0.9165 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2938 | 1.0 | 318 | 3.2849 | 0.7365 | | 2.6267 | 2.0 | 636 | 1.8741 | 0.8297 | | 1.5513 | 3.0 | 954 | 1.1612 | 0.8919 | | 1.0185 | 4.0 | 1272 | 0.8625 | 0.9106 | | 0.8046 | 5.0 | 1590 | 0.7792 | 0.9165 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 2.4.0 - Tokenizers 0.10.3
Alex-VisTas/swin-tiny-patch4-window7-224-finetuned-woody
Alex-VisTas
2022-10-14T00:16:45Z
221
1
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-09-27T20:28:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-woody results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.7927272727272727 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-woody This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4349 - Accuracy: 0.7927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.632 | 1.0 | 58 | 0.5883 | 0.6836 | | 0.6067 | 2.0 | 116 | 0.6017 | 0.6848 | | 0.5865 | 3.0 | 174 | 0.5695 | 0.7042 | | 0.553 | 4.0 | 232 | 0.5185 | 0.7515 | | 0.5468 | 5.0 | 290 | 0.5108 | 0.7430 | | 0.5473 | 6.0 | 348 | 0.4882 | 0.7648 | | 0.5381 | 7.0 | 406 | 0.4800 | 0.7588 | | 0.5468 | 8.0 | 464 | 0.5056 | 0.7358 | | 0.5191 | 9.0 | 522 | 0.4784 | 0.7673 | | 0.5318 | 10.0 | 580 | 0.4762 | 0.7636 | | 0.5079 | 11.0 | 638 | 0.4859 | 0.7673 | | 0.5216 | 12.0 | 696 | 0.4691 | 0.7697 | | 0.515 | 13.0 | 754 | 0.4857 | 0.7624 | | 0.5186 | 14.0 | 812 | 0.4685 | 0.7733 | | 0.4748 | 15.0 | 870 | 0.4536 | 0.7818 | | 0.4853 | 16.0 | 928 | 0.4617 | 0.7770 | | 0.4868 | 17.0 | 986 | 0.4622 | 0.7782 | | 0.4572 | 18.0 | 1044 | 0.4583 | 0.7770 | | 0.4679 | 19.0 | 1102 | 0.4590 | 0.7733 | | 0.4508 | 20.0 | 1160 | 0.4576 | 0.7903 | | 0.4663 | 21.0 | 1218 | 0.4542 | 0.7891 | | 0.4533 | 22.0 | 1276 | 0.4428 | 0.7903 | | 0.4892 | 23.0 | 1334 | 0.4372 | 0.7867 | | 0.4704 | 24.0 | 1392 | 0.4414 | 0.7903 | | 0.4304 | 25.0 | 1450 | 0.4430 | 0.7988 | | 0.4411 | 26.0 | 1508 | 0.4348 | 0.7818 | | 0.4604 | 27.0 | 1566 | 0.4387 | 0.7927 | | 0.441 | 28.0 | 1624 | 0.4378 | 0.7964 | | 0.442 | 29.0 | 1682 | 0.4351 | 0.7915 | | 0.4585 | 30.0 | 1740 | 0.4349 | 0.7927 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.0 - Tokenizers 0.13.1
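For completeness, inference for this image classifier follows the standard pipeline pattern; `example.jpg` below is a placeholder path, and the label names come from the "woody" imagefolder dataset the card was trained on.

```python
from transformers import pipeline

# Minimal sketch, assuming a local image file path.
classifier = pipeline(
    "image-classification",
    model="Alex-VisTas/swin-tiny-patch4-window7-224-finetuned-woody",
)
print(classifier("example.jpg"))  # path is a placeholder
```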
halflings/nateraw_world-happiness_2018_2.csv
halflings
2022-10-13T23:28:57Z
0
0
mlconsole
[ "mlconsole", "tabular-regression", "dataset:nateraw/world-happiness", "license:unknown", "model-index", "region:us" ]
tabular-regression
2022-10-13T23:28:54Z
--- license: unknown inference: false tags: - mlconsole - tabular-regression library_name: mlconsole metrics: - mae - loss datasets: - nateraw/world-happiness model-index: - name: nateraw_world-happiness_2018_2.csv results: - task: type: tabular-regression name: tabular-regression dataset: type: nateraw/world-happiness name: nateraw/world-happiness metrics: - type: mae name: Mean absolute error value: 0.4946480691432953 - type: loss name: Model loss value: 0.4008486866950989 --- # regression model trained on "nateraw/world-happiness" 🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/nateraw_world-happiness_2018_2.csv) in one click. 🧑‍💻 [Train your own model](https://mlconsole.com) on ML Console.
cleanrl/ppo
cleanrl
2022-10-13T20:07:22Z
0
0
null
[ "tensorboard", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "region:us" ]
reinforcement-learning
2022-10-13T18:44:00Z
--- tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - custom-implementation --- # (CleanRL) **PPO** Agent Playing **CartPole-v1** This is a trained model of a PPO agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo.py). # Hyperparameters ```python {'anneal_lr': True, 'batch_size': 512, 'capture_video': True, 'clip_coef': 0.2, 'clip_vloss': True, 'cuda': False, 'ent_coef': 0.01, 'env_id': 'CartPole-v1', 'exp_name': 'ppo', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_repo_id': 'cleanrl/ppo', 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 128, 'norm_adv': True, 'num_envs': 4, 'num_minibatches': 4, 'num_steps': 128, 'save_model': True, 'seed': 1, 'target_kl': None, 'torch_deterministic': True, 'total_timesteps': 500000, 'track': False, 'update_epochs': 4, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
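The hyperparameter dump above contains several derived quantities; a small sketch of how they relate in CleanRL's `ppo.py`, with values copied from the card:

```python
# How the derived sizes in the hyperparameter dump fit together
# (relations as computed in CleanRL's ppo.py; values copied from the card).
num_envs = 4
num_steps = 128
num_minibatches = 4
total_timesteps = 500_000

batch_size = num_envs * num_steps               # 512, matches 'batch_size'
minibatch_size = batch_size // num_minibatches  # 128, matches 'minibatch_size'
num_updates = total_timesteps // batch_size     # 976 policy updates over training

print(batch_size, minibatch_size, num_updates)
```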
EnsorcelledEther/Grief-Seed
EnsorcelledEther
2022-10-13T19:50:32Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-10-13T19:31:47Z
--- license: mit --- ### Grief Seed on Stable Diffusion This is the `grief seed` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). I guess because they are PNGs you can't see them? Idk, I'll fix it later. They look like Grief Seeds from Puella Magi Madoka Magica.
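A hedged sketch of using a Textual Inversion embedding like this one with `diffusers`; both the base checkpoint and the `<grief-seed>` placeholder token are assumptions, so check the repo's embedding file for the real token name. The same pattern applies to the other Textual Inversion concept repos further down.

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: the base model and the "<grief-seed>" placeholder token are
# assumptions; the embedding file in the repo defines the actual token string.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("EnsorcelledEther/Grief-Seed")
image = pipe("a photo of a <grief-seed> on a dark street").images[0]
image.save("grief_seed.png")
```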
johnbradley/TestModel2
johnbradley
2022-10-13T19:45:33Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-10-13T16:42:44Z
--- license: mit --- This repo exists just to test downloading a model (file) from Hugging Face. The big.dat "model" file is just random data created by running `mkfile`. See https://github.com/johnbradley/hf-docker for how this "model" is added to a Docker container.
shensq0814/DIALECT
shensq0814
2022-10-13T19:24:21Z
109
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:declare-lab/cicero", "arxiv:2210.02890", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-05T21:37:06Z
--- license: mit widget: - text: "What is or could be the subsequent event of the target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it " example_title: "Subsequent Event" - text: "What is or could be the cause of the target? <sep>target: But she did and made me disappointed . <sep> context: A: David , why didn't you clean the room ?, <utt>B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt>A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt>B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it " example_title: "Cause" - text: "What is the possible emotional reaction of the listener in response to target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt>B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it " example_title: "Emotional Reaction" datasets: - declare-lab/cicero --- ## Contextualized Commonsense Inference in Dialogues v2 The pretrained checkpoint for the paper [Multiview Contextual Commonsense Inference: A New Dataset and Task](https://arxiv.org/abs/2210.02890). The model is trained based on the [T5-large](https://huggingface.co/t5-large) checkpoint. ![model image](https://drive.google.com/uc?export=download&id=14RIbxgXhREdu5xZiKn5D-UUzaQLDNLqf) ## Datasets The dataset used to pretrain the model can be obtained from the [CICERO repo](https://github.com/declare-lab/CICERO) following instructions. The CICEROv2 consists of annotated commonsense inferences including cause and emotional reaction, etc. The dialogues are from multiple datasets. | Dataset | #Dialogues| #Instances| | -------- | ----- | --------- | | DailyDialog| 1118| 3973| | MuTual| 1011 | 3384| | Dream| 250 | 994| ### Examples Some examples of generated results from the pretrained model (the zero-shot setting). **Subsequent Event** ``` What is or could be the subsequent event of the target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . 
I just can't forget it ``` Predicted subsequent event: ``` David's girlfriend apologized to david for her mistake. ``` **Cause** ``` What is or could be the cause of the target? <sep> target: But she did and made me disappointed . <sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it ``` Predicted cause: ``` David's girlfriend was not nice to him. ``` **Emotional Reaction** ``` What is the possible emotional reaction of the listener in response to target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That \u2019 s a real let-down ., <utt> A: I don t think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it ``` Predicted emotional reaction: ``` The listener is hopeful that david will forgive his girlfriend for her mistake. ``` ## BibTeX entry and citation info If you use the model, you can cite: ```bibtex @article{Shen2022MultiviewCC, title={Multiview Contextual Commonsense Inference: A New Dataset and Task}, author={Siqi Shen and Deepanway Ghosal and Navonil Majumder and Henry Lim and Rada Mihalcea and Soujanya Poria}, journal={ArXiv}, year={2022}, volume={abs/2210.02890} } ```
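To reproduce the examples above outside the hosted widget, the checkpoint can be loaded as a standard seq2seq model; a minimal sketch follows (the dialogue context is shortened here for readability).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Generation sketch using the prompt format from the widget examples:
# question <sep> target: ... <sep> context: ... (context shortened here).
tokenizer = AutoTokenizer.from_pretrained("shensq0814/DIALECT")
model = AutoModelForSeq2SeqLM.from_pretrained("shensq0814/DIALECT")

prompt = (
    "What is or could be the cause of the target? <sep> "
    "target: But she did and made me disappointed . <sep> "
    "context: A: Why are you feeling depressed ?, <utt> "
    "B: I was told my girlfriend was speaking ill of me ."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```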
EnsorcelledEther/Kumiko
EnsorcelledEther
2022-10-13T19:21:52Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-10-13T19:05:15Z
--- license: mit --- ### Kumiko on Stable Diffusion This is the `Kumiko` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). I guess because they are png you can't see them? Idk, I'll fix it later. It's Kumiko! It looks like Kumiko. Here is the new concept you will be able to use as a `style`: ![kumiko 0](https://huggingface.co/Vicidi/Kumiko/blob/main/00000-0.png) ![kumiko 1](https://huggingface.co/Vicidi/Kumiko/blob/main/00001-0.png) ![kumiko 2](https://huggingface.co/Vicidi/Kumiko/blob/main/00002-0.png) ![kumiko 3](https://huggingface.co/Vicidi/Kumiko/blob/main/00003-0.png) ![kumiko 4](https://huggingface.co/Vicidi/Kumiko/blob/main/00004-0.png) ![kumiko 5](https://huggingface.co/Vicidi/Kumiko/blob/main/00005-0.png) ![kumiko 6](https://huggingface.co/Vicidi/Kumiko/blob/main/00006-0.png) ![kumiko 7](https://huggingface.co/Vicidi/Kumiko/blob/main/00007-0.png) ![kumiko 8](https://huggingface.co/Vicidi/Kumiko/blob/main/00008-0.png) ![kumiko 9](https://huggingface.co/Vicidi/Kumiko/blob/main/00009-0.png)
MuhammadIqbalBazmi/wav2vec2-base-intent-classification-ori
MuhammadIqbalBazmi
2022-10-13T18:47:25Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-10-11T15:26:10Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-intent-classification-ori results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-intent-classification-ori This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [intent-dataset](https://huggingface.co/datasets/MuhammadIqbalBazmi/intent-dataset) dataset. It achieves the following results on the evaluation set: - Loss: 0.4928 - Accuracy: 0.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1867 | 1.0 | 28 | 2.1745 | 0.2708 | | 2.1177 | 2.0 | 56 | 2.1165 | 0.2708 | | 2.1012 | 3.0 | 84 | 2.0553 | 0.2708 | | 1.9851 | 4.0 | 112 | 1.9551 | 0.375 | | 1.9092 | 5.0 | 140 | 1.9765 | 0.2917 | | 1.6848 | 6.0 | 168 | 1.8461 | 0.2917 | | 1.6576 | 7.0 | 196 | 1.5223 | 0.5 | | 1.4492 | 8.0 | 224 | 1.4500 | 0.4792 | | 1.2193 | 9.0 | 252 | 1.5349 | 0.4792 | | 1.1149 | 10.0 | 280 | 1.2159 | 0.5833 | | 1.0615 | 11.0 | 308 | 1.1469 | 0.6875 | | 1.0584 | 12.0 | 336 | 1.2778 | 0.6042 | | 0.8237 | 13.0 | 364 | 1.1774 | 0.5625 | | 0.6699 | 14.0 | 392 | 0.9661 | 0.6875 | | 0.7414 | 15.0 | 420 | 1.2787 | 0.5208 | | 0.5324 | 16.0 | 448 | 0.8592 | 0.7292 | | 0.3753 | 17.0 | 476 | 0.6860 | 0.7917 | | 0.3274 | 18.0 | 504 | 0.6210 | 0.8333 | | 0.3667 | 19.0 | 532 | 0.7310 | 0.75 | | 0.2347 | 20.0 | 560 | 0.6801 | 0.7292 | | 0.2036 | 21.0 | 588 | 0.9876 | 0.6875 | | 0.1711 | 22.0 | 616 | 0.6323 | 0.7917 | | 0.205 | 23.0 | 644 | 0.4414 | 0.8958 | | 0.0892 | 24.0 | 672 | 0.4253 | 0.8958 | | 0.0777 | 25.0 | 700 | 0.4703 | 0.8958 | | 0.0717 | 26.0 | 728 | 0.4883 | 0.8958 | | 0.041 | 27.0 | 756 | 0.6224 | 0.8542 | | 0.0493 | 28.0 | 784 | 0.5839 | 0.875 | | 0.0405 | 29.0 | 812 | 0.6454 | 0.8542 | | 0.04 | 30.0 | 840 | 0.6102 | 0.875 | | 0.0333 | 31.0 | 868 | 0.6080 | 0.875 | | 0.0303 | 32.0 | 896 | 0.5539 | 0.875 | | 0.025 | 33.0 | 924 | 0.5799 | 0.8958 | | 0.0246 | 34.0 | 952 | 0.5766 | 0.8958 | | 0.0209 | 35.0 | 980 | 0.5700 | 0.8958 | | 0.0225 | 36.0 | 1008 | 0.5709 | 0.8958 | | 0.0225 | 37.0 | 1036 | 0.5582 | 0.8958 | | 0.0217 | 38.0 | 1064 | 0.5258 | 0.875 | | 0.0207 | 39.0 | 1092 | 0.5058 | 0.8958 | | 0.0234 | 40.0 | 1120 | 0.4981 | 0.8958 | | 0.021 | 41.0 | 1148 | 0.4928 | 0.9167 | | 0.0224 | 42.0 | 1176 | 0.4962 | 0.9167 | | 0.0212 | 43.0 | 1204 | 0.5329 | 0.8958 | | 0.0208 | 44.0 | 1232 | 0.5727 | 0.8958 | | 0.0206 | 45.0 | 1260 | 0.5733 | 0.8958 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
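Inference for this audio classifier follows the audio-classification pipeline; `command.wav` below is a placeholder for a 16 kHz mono recording, which is what wav2vec2-base checkpoints expect.

```python
from transformers import pipeline

# Minimal sketch; "command.wav" is a placeholder for a 16 kHz mono recording.
classifier = pipeline(
    "audio-classification",
    model="MuhammadIqbalBazmi/wav2vec2-base-intent-classification-ori",
)
print(classifier("command.wav"))
```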
Simon17/Klassifizierung-Gewerke
Simon17
2022-10-13T18:31:16Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-29T10:57:23Z
--- license: mit tags: - generated_from_trainer metrics: - f1 widget: - text: "11025RLT601PU01SW01" - text: "11004KAE906KR1BM04" - text: "11004HZG201PU1SM02" - text: "12064ISP005IS01SW09" - text: "Störung HZG Pumpe" model-index: - name: Klassifizierung-Gewerke results: [] --- # Klassifizierung-Gewerke This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0398 - F1: 0.9931 ## Model description The model is based on a BACnet data set and makes it possible to classify them according to trades. ## Intended uses & limitations More information needed ## Training and evaluation data The model is based on a German-based language dataset. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1473 | 1.0 | 726 | 0.0952 | 0.9822 | | 0.0252 | 2.0 | 1452 | 0.0488 | 0.9918 | | 0.028 | 3.0 | 2178 | 0.0398 | 0.9931 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Simon17/Klassifizierung-RLT
Simon17
2022-10-13T18:02:46Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-29T10:32:54Z
--- license: mit tags: - generated_from_trainer metrics: - f1 widget: - text: "11004RLT790AG01BS01" - text: "11004RLT615AB01BM06" - text: "12064RLT606KL02RM01" - text: "11004RLT722VA01AS01" - text: "Abl. Anl. 15 Aufzugm." model-index: - name: Klassifizierung-RLT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Klassifizierung-RLT This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0616 - F1: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.828 | 1.0 | 292 | 0.2156 | 0.9447 | | 0.1491 | 2.0 | 584 | 0.0832 | 0.9805 | | 0.0695 | 3.0 | 876 | 0.0616 | 0.9852 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
sd-concepts-library/command-and-conquer-remastered-cameos
sd-concepts-library
2022-10-13T18:01:25Z
0
3
null
[ "license:mit", "region:us" ]
null
2022-10-13T18:00:32Z
--- license: mit --- ### command_and_conquer_remastered_cameos on Stable Diffusion This is the `<command_and_conquer_remastered_cameos>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<command_and_conquer_remastered_cameos> 0](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/17.png) ![<command_and_conquer_remastered_cameos> 1](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/28.png) ![<command_and_conquer_remastered_cameos> 2](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/32.png) ![<command_and_conquer_remastered_cameos> 3](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/6.png) ![<command_and_conquer_remastered_cameos> 4](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/20.png) ![<command_and_conquer_remastered_cameos> 5](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/14.png) ![<command_and_conquer_remastered_cameos> 6](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/7.png) ![<command_and_conquer_remastered_cameos> 7](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/36.png) ![<command_and_conquer_remastered_cameos> 8](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/2.png) ![<command_and_conquer_remastered_cameos> 9](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/33.png) ![<command_and_conquer_remastered_cameos> 10](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/22.png) ![<command_and_conquer_remastered_cameos> 11](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/40.png) ![<command_and_conquer_remastered_cameos> 12](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/30.png) ![<command_and_conquer_remastered_cameos> 13](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/3.png) ![<command_and_conquer_remastered_cameos> 14](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/5.png) ![<command_and_conquer_remastered_cameos> 15](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/21.png) ![<command_and_conquer_remastered_cameos> 16](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/9.png) ![<command_and_conquer_remastered_cameos> 
17](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/27.png) ![<command_and_conquer_remastered_cameos> 18](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/34.png) ![<command_and_conquer_remastered_cameos> 19](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/0.png) ![<command_and_conquer_remastered_cameos> 20](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/12.png) ![<command_and_conquer_remastered_cameos> 21](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/10.png) ![<command_and_conquer_remastered_cameos> 22](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/1.png) ![<command_and_conquer_remastered_cameos> 23](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/24.png) ![<command_and_conquer_remastered_cameos> 24](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/38.png) ![<command_and_conquer_remastered_cameos> 25](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/35.png) ![<command_and_conquer_remastered_cameos> 26](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/18.png) ![<command_and_conquer_remastered_cameos> 27](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/37.png) ![<command_and_conquer_remastered_cameos> 28](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/8.png) ![<command_and_conquer_remastered_cameos> 29](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/16.png) ![<command_and_conquer_remastered_cameos> 30](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/23.png) ![<command_and_conquer_remastered_cameos> 31](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/11.png) ![<command_and_conquer_remastered_cameos> 32](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/31.png) ![<command_and_conquer_remastered_cameos> 33](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/19.png) ![<command_and_conquer_remastered_cameos> 34](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/41.png) ![<command_and_conquer_remastered_cameos> 35](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/4.png) ![<command_and_conquer_remastered_cameos> 36](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/15.png) ![<command_and_conquer_remastered_cameos> 37](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/26.png) ![<command_and_conquer_remastered_cameos> 38](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/29.png) 
![<command_and_conquer_remastered_cameos> 39](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/25.png) ![<command_and_conquer_remastered_cameos> 40](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/13.png) ![<command_and_conquer_remastered_cameos> 41](https://huggingface.co/sd-concepts-library/command-and-conquer-remastered-cameos/resolve/main/concept_images/39.png)
Digitalwitness/distilgpt2-finetuned-shakespeare
Digitalwitness
2022-10-13T17:49:29Z
59
0
transformers
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-12T12:45:13Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Digitalwitness/distilgpt2-finetuned-shakespeare results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Digitalwitness/distilgpt2-finetuned-shakespeare This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0603 - Validation Loss: 2.2069 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4056 | 3.1490 | 0 | | 3.1359 | 2.9958 | 1 | | 2.9970 | 2.9052 | 2 | | 2.9003 | 2.8363 | 3 | | 2.8192 | 2.7759 | 4 | | 2.7524 | 2.7306 | 5 | | 2.6881 | 2.6775 | 6 | | 2.6294 | 2.6329 | 7 | | 2.5716 | 2.5949 | 8 | | 2.5213 | 2.5512 | 9 | | 2.4652 | 2.5107 | 10 | | 2.4156 | 2.4803 | 11 | | 2.3677 | 2.4329 | 12 | | 2.3163 | 2.3989 | 13 | | 2.2735 | 2.3695 | 14 | | 2.2311 | 2.3317 | 15 | | 2.1842 | 2.2924 | 16 | | 2.1386 | 2.2688 | 17 | | 2.1015 | 2.2297 | 18 | | 2.0603 | 2.2069 | 19 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.6.0 - Tokenizers 0.13.1
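Since the repo ships TensorFlow weights (note the `tf` tag), a minimal generation sketch pins the pipeline to the TF backend; the prompt is just an illustrative example.

```python
from transformers import pipeline

# Generation sketch; framework="tf" selects the TensorFlow weights this repo ships.
generator = pipeline(
    "text-generation",
    model="Digitalwitness/distilgpt2-finetuned-shakespeare",
    framework="tf",
)
print(generator("Shall I compare thee", max_new_tokens=40)[0]["generated_text"])
```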
ImageIN/convnext-tiny-224_finetuned_on_unlabelled_IA_with_snorkel_labels
ImageIN
2022-10-13T17:43:17Z
216
0
transformers
[ "transformers", "pytorch", "convnext", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-13T12:46:30Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: convnext-tiny-224_finetuned_on_unlabelled_IA_with_snorkel_labels results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224_finetuned_on_unlabelled_IA_with_snorkel_labels This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4381 - Precision: 0.8239 - Recall: 0.7919 - F1: 0.8058 - Accuracy: 0.8629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 112 | 0.5589 | 0.7547 | 0.5380 | 0.5097 | 0.7679 | | No log | 2.0 | 224 | 0.5578 | 0.7691 | 0.5387 | 0.5103 | 0.7690 | | No log | 3.0 | 336 | 0.4812 | 0.8513 | 0.7371 | 0.7709 | 0.8555 | | No log | 4.0 | 448 | 0.4387 | 0.8734 | 0.6539 | 0.6835 | 0.8259 | | 0.482 | 5.0 | 560 | 0.4427 | 0.8322 | 0.6250 | 0.6449 | 0.8085 | | 0.482 | 6.0 | 672 | 0.6234 | 0.8219 | 0.5702 | 0.5635 | 0.7848 | | 0.482 | 7.0 | 784 | 0.6187 | 0.8791 | 0.6070 | 0.6196 | 0.8054 | | 0.482 | 8.0 | 896 | 0.3953 | 0.8683 | 0.7134 | 0.7507 | 0.8502 | | 0.3656 | 9.0 | 1008 | 0.4381 | 0.8239 | 0.7919 | 0.8058 | 0.8629 | | 0.3656 | 10.0 | 1120 | 0.5346 | 0.7794 | 0.7900 | 0.7844 | 0.8370 | | 0.3656 | 11.0 | 1232 | 0.3685 | 0.8678 | 0.7600 | 0.7943 | 0.8681 | | 0.3656 | 12.0 | 1344 | 0.6900 | 0.6244 | 0.6667 | 0.6099 | 0.6435 | | 0.3656 | 13.0 | 1456 | 0.6097 | 0.6832 | 0.7149 | 0.6931 | 0.7511 | | 0.2987 | 14.0 | 1568 | 0.5435 | 0.8746 | 0.6754 | 0.7096 | 0.8354 | | 0.2987 | 15.0 | 1680 | 0.5525 | 0.7277 | 0.7690 | 0.7411 | 0.7890 | | 0.2987 | 16.0 | 1792 | 0.5003 | 0.8086 | 0.7694 | 0.7856 | 0.8507 | | 0.2987 | 17.0 | 1904 | 0.8172 | 0.6183 | 0.6576 | 0.6074 | 0.6450 | | 0.2598 | 18.0 | 2016 | 0.6102 | 0.6977 | 0.7489 | 0.7070 | 0.75 | | 0.2598 | 19.0 | 2128 | 0.4260 | 0.8523 | 0.7497 | 0.7822 | 0.8602 | | 0.2598 | 20.0 | 2240 | 0.5503 | 0.8276 | 0.6770 | 0.7079 | 0.8281 | | 0.2598 | 21.0 | 2352 | 0.4574 | 0.7994 | 0.7785 | 0.7879 | 0.8481 | | 0.2598 | 22.0 | 2464 | 0.6307 | 0.8620 | 0.6353 | 0.6592 | 0.8165 | | 0.2111 | 23.0 | 2576 | 0.4605 | 0.8196 | 0.7697 | 0.7894 | 0.8555 | | 0.2111 | 24.0 | 2688 | 0.5290 | 0.8152 | 0.7320 | 0.7592 | 0.8434 | | 0.2111 | 25.0 | 2800 | 0.4754 | 0.8755 | 0.7216 | 0.7599 | 0.8550 | | 0.2111 | 26.0 | 2912 | 0.5161 | 0.8428 | 0.7436 | 0.7750 | 0.8555 | | 0.1638 | 27.0 | 3024 | 0.5753 | 0.7358 | 0.7278 | 0.7316 | 0.8043 | | 0.1638 | 28.0 | 3136 | 0.6403 | 0.8468 | 0.7016 | 0.7360 | 0.8412 | | 0.1638 | 29.0 | 3248 | 0.5418 | 0.7912 | 0.7473 | 0.7647 | 0.8381 | | 0.1638 | 30.0 | 3360 | 0.5651 | 0.8240 | 0.7315 | 0.7607 | 0.8460 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.0 - Tokenizers 0.13.1
shuojiang/a2c-AntBulletEnv-v0
shuojiang
2022-10-13T16:45:34Z
2
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-13T16:44:26Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1841.53 +/- 199.08 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
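The usage section above is left as a TODO; a minimal loading sketch with `huggingface_sb3` might look like the following, where the checkpoint filename inside the repo is an assumption.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumption; check the repo's file listing for the actual zip name.
checkpoint = load_from_hub(
    repo_id="shuojiang/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```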
CarperAI/carptriever-1
CarperAI
2022-10-13T16:37:45Z
170
12
transformers
[ "transformers", "pytorch", "bert", "en", "dataset:pile", "arxiv:2112.09118", "arxiv:1909.09436", "arxiv:2201.10005", "arxiv:2101.00027", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-09-06T16:44:44Z
--- language: - en license: mit datasets: - pile metrics: - nDCG@10 - MRR --- # Carptriever-1 ## Model description Carptriever-1 is a `bert-large-uncased` retrieval model trained with contrastive learning via a momentum contrastive (MoCo) mechanism following the work of G. Izacard et al. in ["Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning"](https://arxiv.org/abs/2112.09118). ## How to use ```python from transformers import AutoTokenizer, AutoModel def mean_pooling(token_embeddings, mask): token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.) sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None] return sentence_embeddings # Remove pooling layer model = AutoModel.from_pretrained("CarperAI/carptriever-1", add_pooling_layer=False) tokenizer = AutoTokenizer.from_pretrained("CarperAI/carptriever-1") sentences = [ "Where was Marie Curie born?", # Query "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.", "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace." ] # Apply tokenizer inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Encode sentences outputs = model(**inputs) embeddings = mean_pooling(outputs[0], inputs['attention_mask']) # Compute dot-product scores between the query and sentence embeddings query_embedding, sentence_embeddings = embeddings[0], embeddings[1:] scores = (query_embedding @ sentence_embeddings.transpose(0, 1)).cpu().tolist() sentence_score_pairs = sorted(zip(sentences[1:], scores), reverse=True) print(f"Query: {sentences[0]}") for sentence, score in sentence_score_pairs: print(f"\nSentence: {sentence}\nScore: {score:.4f}") ``` ## Training data Carptriever-1 is pre-trained on a de-duplicated subset of [The Pile](https://pile.eleuther.ai/), a large and diverse dataset created by EleutherAI for language model training. This subset was created through a [Minhash LSH](http://ekzhu.com/datasketch/lsh.html) process using a threshold of `0.87`. ## Training procedure The model was trained on 32 40GB A100 for approximately 100 hours with the following configurations: - Base model: - `bert-large-uncased` - Optimizer settings: - `optimizer = AdamW` - `lr = 1e-5` - `schedule = linear` - `warmup = 20,000 steps` - `batch size = 2048` - `training steps = 150,000` - MoCo settings: - `queue size = 8192` - `momentum = 0.999` - `temperature = 0.05` ## Evaluation results #### [BEIR: Benchmarking IR](https://github.com/beir-cellar/beir) We report the following BEIR scores as measured in normalized discounted cumulative gain (nDCG@10): | Model | Avg | MSMARCO | TREC-Covid | NFCorpus | NaturalQuestions | HotpotQA | FiQA | ArguAna | Tóuche-2020 | Quora | CQAdupstack | DBPedia | Scidocs | Fever | Climate-fever | Scifact | |---------------|-------|---------|------------|----------|------------------|----------|------|---------|-------------|-------|-------------|---------|---------|-------|---------------|----------| | Contriever* | 35.97 | 20.6 | 27.4 | 31.7 | 25.4 | 48.1 | 24.5 | 37.9 | 19.3 | 83.5 | 28.40 | 29.2 | 14.9 | 68.20 | 15.5 | 64.9 | | Carptriever-1 | 34.54 | 18.83 | **52.2** | 28.5 | 21.1 | 39.4 | 23.2 | 31.7 | 15.2 | 81.3 | 26.88 | 25.4 | 14.2 | 57.36 | **17.9** | 64.9 | \* Results are taken from the Contriever [GitHub repository](https://github.com/facebookresearch/contriever). 
Note that degradation in performance, relative to the Contriever model, was expected given the much broader diversity of our training dataset. We plan on addressing this in future updates with architectural improvements and view Carptriever-1 as our first iteration in the exploratory phase towards better language-embedding models. #### [CodeSearchNet Challenge Evaluating the State of Semantic Code Search](https://arxiv.org/pdf/1909.09436.pdf) We provide results on the CodeSearchNet benchmark, measured in Mean Reciprocal Rank (MRR), following the code search procedure outlined in Section 3.3 of Neelakantan et al.'s ["Text and Code Embeddings by Contrastive Pre-Training"](https://arxiv.org/pdf/2201.10005.pdf). `Candidate Size = 1,000` | Model | Avg | Python | Go | Ruby | PHP | Java | JS | |-----------------|-------|--------|-------|-------|-------|-------|-------| | Carptriever-1 | 60.24 | 65.85 | 63.29 | 62.1 | 59.1 | 55.52 | 55.55 | | Contriever | 49.39 | 54.81 | 58.9 | 55.19 | 38.46 | 44.89 | 44.09 | `Candidate Size = 10,000` | Model. | Avg | Python | Go | Ruby | PHP | Java | JS | |-----------------|-------|--------|-------|-------|-------|-------|-------| | Carptriever-1 | 48.59 | 55.98 | 43.18 | 56.06 | 45.62 | 46.04 | 44.66 | | Contriever | 37 | 45.43 | 36.08 | 48.07 | 25.59 | 32.89 | 31.44 | ## Acknowledgements This work would not have been possible without the compute support of [Stability AI](https://stability.ai/). Thank you to Louis Castricato for research guidance and Reshinth Adithyan for creating the CodeSearchNet evaluation script. ## Citations ```bibtex @misc{izacard2021contriever, title={Unsupervised Dense Information Retrieval with Contrastive Learning}, author={Gautier Izacard and Mathilde Caron and Lucas Hosseini and Sebastian Riedel and Piotr Bojanowski and Armand Joulin and Edouard Grave}, year={2021}, url = {https://arxiv.org/abs/2112.09118}, doi = {10.48550/ARXIV.2112.09118}, } ``` ```bibtex @article{pile, title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
shuojiang/Reinforce-Pixelcopter-PLE-v0
shuojiang
2022-10-13T16:30:23Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-10-13T16:30:16Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 14.40 +/- 14.31 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
caijiahao/ygvxc
caijiahao
2022-10-13T15:21:54Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-10-13T15:21:54Z
--- license: bigscience-bloom-rail-1.0 ---
EdBianchi/T5-finetuned-abstracts
EdBianchi
2022-10-13T14:43:00Z
72
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-11T16:55:09Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: EdBianchi/T5-finetuned-abstracts results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # EdBianchi/T5-finetuned-abstracts This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9469 - Train Lr: 0.0004 - Validation Loss: 1.8462 - Validation Lr: 0.0002 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 0.00015378147, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Lr | Validation Loss | Validation Lr | Epoch | |:----------:|:--------:|:---------------:|:-------------:|:-----:| | 2.2534 | 0.0005 | 1.9839 | 0.0007 | 0 | | 1.9469 | 0.0004 | 1.8462 | 0.0002 | 1 | ### Framework versions - Transformers 4.21.3 - TensorFlow 2.10.0 - Datasets 2.4.0 - Tokenizers 0.12.1
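Since the card lists only the training configuration, here is a minimal usage sketch under stated assumptions: the exact input format expected by this fine-tune is not documented, so the "summarize:"-style prefix below is only a guess. ```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "EdBianchi/T5-finetuned-abstracts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumption: the model is used as a standard text2text generator on abstracts;
# adjust the prefix/input format to whatever was used during training.
inputs = tokenizer("summarize: <abstract text goes here>", return_tensors="tf")
outputs = model.generate(inputs["input_ids"], max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```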
blmnk/distilbert-base-cased-finetuned-news
blmnk
2022-10-13T14:34:20Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-13T06:06:08Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-cased-finetuned-news results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased-finetuned-news This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu116 - Datasets 2.5.2 - Tokenizers 0.12.1
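As the card only records training settings, a minimal inference sketch may be useful; note that the label set is not documented, so inspect the model config's `id2label` mapping (or the pipeline output) to see which news categories this fine-tune actually predicts. ```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="blmnk/distilbert-base-cased-finetuned-news",
)

# Example input; the returned label names depend on the (undocumented)
# dataset this checkpoint was fine-tuned on.
print(classifier("Stocks rallied after the central bank left interest rates unchanged."))
```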
north/t5_xl_NCC_modern
north
2022-10-13T14:33:33Z
8
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-21T11:46:48Z
--- language: - no - nn - sv - dk - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: other --- The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|| |North-T5&#8209;NCC&#8209;modern|[🤗](https://huggingface.co/north/t5_small_NCC_modern)|[🤗](https://huggingface.co/north/t5_base_NCC_modern)|[🤗](https://huggingface.co/north/t5_large_NCC_modern)|✔|| |North-T5&#8209;NCC&#8209;modern&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern_lm)|| |North-T5&#8209;NCC&#8209;scand|[🤗](https://huggingface.co/north/t5_small_NCC_scand)|[🤗](https://huggingface.co/north/t5_base_NCC_scand)|[🤗](https://huggingface.co/north/t5_large_NCC_scand)|[🤗](https://huggingface.co/north/t5_xl_NCC_scand)|| |North-T5&#8209;scand|[🤗](https://huggingface.co/north/t5_small_scand)|[🤗](https://huggingface.co/north/t5_base_scand)|[🤗](https://huggingface.co/north/t5_large_scand)|| |North-byT5&#8209;NCC|[🤗](https://huggingface.co/north/byt5_small_NCC)|[🤗](https://huggingface.co/north/byt5_base_NCC)|[🤗](https://huggingface.co/north/byt5_large_NCC)|| |North-T5&#8209;scand3M|| [🤗](https://huggingface.co/north/t5_base_scand3M)|[🤗](https://huggingface.co/north/t5_large_scand3M)|[🤗](https://huggingface.co/north/t5_xl_scand3M)|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xl/norwegian_NCC_plus_English_pluss200k_balanced_bokmaal_nynorsk_t5x_xl/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). 
|**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shortly. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation or NLI, it is well documented that there is a clear benefit to doing a step of unsupervised LM-training before starting the finetuning.| |**North&#8209;T5&#8209;NCC&#8209;modern**| The model is pretrained for an additional 200k steps on a balanced Bokmål and Nynorsk corpus. While this was originally done for translation between Bokmål and Nynorsk, it might also give improved results on tasks where you know that the input/output is modern "standard" text. A significant part of the training corpus is newspapers and reports.| |**North&#8209;T5&#8209;NCC&#8209;modern&#8209;lm**| Trained as above but with an additional 100k "language model"-pretraining.| |**North&#8209;T5&#8209;NCC&#8209;scand**|The model is pretrained for an additional 200k steps on a Scandinavian corpus (Bokmål, Nynorsk, Danish, Swedish and Icelandic (+ a tiny bit of Faroese)). The model was trained to increase the understanding of what effect such training has on the various languages.| |**North&#8209;T5&#8209;scand**|Pretrained for 1,700,000 steps starting with the mT5 checkpoint. The purpose of this model is to study the effect of different training regimes for Scandinavian language models.| |**North&#8209;byT5&#8209;base**| This is a vocabulary-free version of T5. It is trained exactly like North-T5, but instead of the 250,112-token vocabulary, this model operates directly on the raw text. The model architecture might be of particular interest for tasks involving, for instance, spelling correction, OCR-cleaning, handwriting recognition etc. However, it will - by design - have a much shorter maximum sequence length.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used.
Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base-model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models to this set.
Which models are added depends on the feedback from users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
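As a small illustration of the span-filling pretraining task shown in the widget examples (downstream use still requires finetuning, and this 3B XL checkpoint needs correspondingly large memory), a minimal sketch: ```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "north/t5_xl_NCC_modern"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# One of the widget examples: the model fills in the <extra_id_N> sentinel spans.
text = ("På <extra_id_0> kan man <extra_id_1> en bok, og man kan også "
        "<extra_id_2> seg ned og lese den.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```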
north/t5_base_NCC_modern
north
2022-10-13T14:32:34Z
7
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-21T11:45:36Z
--- language: - no - nn - sv - dk - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: other --- The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|| |North-T5&#8209;NCC&#8209;modern|[🤗](https://huggingface.co/north/t5_small_NCC_modern)|✔|[🤗](https://huggingface.co/north/t5_large_NCC_modern)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern)|| |North-T5&#8209;NCC&#8209;modern&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern_lm)|| |North-T5&#8209;NCC&#8209;scand|[🤗](https://huggingface.co/north/t5_small_NCC_scand)|[🤗](https://huggingface.co/north/t5_base_NCC_scand)|[🤗](https://huggingface.co/north/t5_large_NCC_scand)|[🤗](https://huggingface.co/north/t5_xl_NCC_scand)|| |North-T5&#8209;scand|[🤗](https://huggingface.co/north/t5_small_scand)|[🤗](https://huggingface.co/north/t5_base_scand)|[🤗](https://huggingface.co/north/t5_large_scand)|| |North-byT5&#8209;NCC|[🤗](https://huggingface.co/north/byt5_small_NCC)|[🤗](https://huggingface.co/north/byt5_base_NCC)|[🤗](https://huggingface.co/north/byt5_large_NCC)|| |North-T5&#8209;scand3M|| [🤗](https://huggingface.co/north/t5_base_scand3M)|[🤗](https://huggingface.co/north/t5_large_scand3M)|[🤗](https://huggingface.co/north/t5_xl_scand3M)|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/norwegian_NCC_plus_English_pluss200k_balanced_bokmaal_nynorsk_t5x_base/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). 
|**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shortly. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation or NLI, it is well documented that there is a clear benefit to doing a step of unsupervised LM-training before starting the finetuning.| |**North&#8209;T5&#8209;NCC&#8209;modern**| The model is pretrained for an additional 200k steps on a balanced Bokmål and Nynorsk corpus. While this was originally done for translation between Bokmål and Nynorsk, it might also give improved results on tasks where you know that the input/output is modern "standard" text. A significant part of the training corpus is newspapers and reports.| |**North&#8209;T5&#8209;NCC&#8209;modern&#8209;lm**| Trained as above but with an additional 100k "language model"-pretraining.| |**North&#8209;T5&#8209;NCC&#8209;scand**|The model is pretrained for an additional 200k steps on a Scandinavian corpus (Bokmål, Nynorsk, Danish, Swedish and Icelandic (+ a tiny bit of Faroese)). The model was trained to increase the understanding of what effect such training has on the various languages.| |**North&#8209;T5&#8209;scand**|Pretrained for 1,700,000 steps starting with the mT5 checkpoint. The purpose of this model is to study the effect of different training regimes for Scandinavian language models.| |**North&#8209;byT5&#8209;base**| This is a vocabulary-free version of T5. It is trained exactly like North-T5, but instead of the 250,112-token vocabulary, this model operates directly on the raw text. The model architecture might be of particular interest for tasks involving, for instance, spelling correction, OCR-cleaning, handwriting recognition etc. However, it will - by design - have a much shorter maximum sequence length.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used.
Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base-model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models to this set.
Which models are added depends on the feedback from users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
north/t5_small_NCC_modern
north
2022-10-13T14:32:03Z
109
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-21T11:44:53Z
--- language: - no - nn - sv - dk - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: other --- The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|| |North-T5&#8209;NCC&#8209;modern|✔|[🤗](https://huggingface.co/north/t5_base_NCC_modern)|[🤗](https://huggingface.co/north/t5_large_NCC_modern)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern)|| |North-T5&#8209;NCC&#8209;modern&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern_lm)|| |North-T5&#8209;NCC&#8209;scand|[🤗](https://huggingface.co/north/t5_small_NCC_scand)|[🤗](https://huggingface.co/north/t5_base_NCC_scand)|[🤗](https://huggingface.co/north/t5_large_NCC_scand)|[🤗](https://huggingface.co/north/t5_xl_NCC_scand)|| |North-T5&#8209;scand|[🤗](https://huggingface.co/north/t5_small_scand)|[🤗](https://huggingface.co/north/t5_base_scand)|[🤗](https://huggingface.co/north/t5_large_scand)|| |North-byT5&#8209;NCC|[🤗](https://huggingface.co/north/byt5_small_NCC)|[🤗](https://huggingface.co/north/byt5_base_NCC)|[🤗](https://huggingface.co/north/byt5_large_NCC)|| |North-T5&#8209;scand3M|| [🤗](https://huggingface.co/north/t5_base_scand3M)|[🤗](https://huggingface.co/north/t5_large_scand3M)|[🤗](https://huggingface.co/north/t5_xl_scand3M)|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/small/norwegian_NCC_plus_English_pluss200k_balanced_bokmaal_nynorsk_t5x_small/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). 
|**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shortly. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation or NLI, it is well documented that there is a clear benefit to doing a step of unsupervised LM-training before starting the finetuning.| |**North&#8209;T5&#8209;NCC&#8209;modern**| The model is pretrained for an additional 200k steps on a balanced Bokmål and Nynorsk corpus. While this was originally done for translation between Bokmål and Nynorsk, it might also give improved results on tasks where you know that the input/output is modern "standard" text. A significant part of the training corpus is newspapers and reports.| |**North&#8209;T5&#8209;NCC&#8209;modern&#8209;lm**| Trained as above but with an additional 100k "language model"-pretraining.| |**North&#8209;T5&#8209;NCC&#8209;scand**|The model is pretrained for an additional 200k steps on a Scandinavian corpus (Bokmål, Nynorsk, Danish, Swedish and Icelandic (+ a tiny bit of Faroese)). The model was trained to increase the understanding of what effect such training has on the various languages.| |**North&#8209;T5&#8209;scand**|Pretrained for 1,700,000 steps starting with the mT5 checkpoint. The purpose of this model is to study the effect of different training regimes for Scandinavian language models.| |**North&#8209;byT5&#8209;base**| This is a vocabulary-free version of T5. It is trained exactly like North-T5, but instead of the 250,112-token vocabulary, this model operates directly on the raw text. The model architecture might be of particular interest for tasks involving, for instance, spelling correction, OCR-cleaning, handwriting recognition etc. However, it will - by design - have a much shorter maximum sequence length.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used.
Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base-model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models to this set.
Which models are added depends on the feedback from users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
north/t5_xxl_NCC_lm
north
2022-10-13T13:55:09Z
6
1
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-21T11:47:06Z
--- language: - no - nn - sv - dk - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: apache-2.0 --- The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|✔|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xxl/norwegian_NCC_plus_English_pluss100k_lm_t5x_xxl/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). |**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shorter. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained an additonal 500.000 steps on from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). 
In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation or NLI, it is well documented that there is a clear benefit to doing a step of unsupervised LM-training before starting the finetuning.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base-model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware.
The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models to this set. Which models are added depends on the feedback from users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
north/t5_large_NCC
north
2022-10-13T13:54:32Z
16
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-21T11:46:30Z
--- language: - no - nn - sv - dk - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: apache-2.0 --- The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|✔|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/large/norwegian_NCC_plus_English_t5x_large/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). |**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shorter. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained an additonal 500.000 steps on from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). 
In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks: for translation and NLI, for instance, it is well documented that a step of unsupervised LM-training before finetuning gives a clear benefit.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (unlike the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute-intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware.
The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models in this set. Which models are added will depend on the feedback from the users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
north/t5_base_NCC
north
2022-10-13T13:53:50Z
5
5
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-21T11:45:48Z
--- language: - no - nn - sv - dk - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: apache-2.0 --- The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|✔|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/norwegian_NCC_plus_English_t5x_base/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). |**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shorter. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained an additonal 500.000 steps on from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). 
In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks: for translation and NLI, for instance, it is well documented that a step of unsupervised LM-training before finetuning gives a clear benefit.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (unlike the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute-intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware.
The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models in this set. Which models are added will depend on the feedback from the users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
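Since the card for `north/t5_base_NCC` does not include a Transformers usage snippet, the sketch below shows minimal span-infilling inference with the converted checkpoint. It is only an illustration: the prompt reuses the sentinel-token widget example from the card, and the generation settings are assumptions rather than values recommended by the author.

```python
# A minimal, illustrative sketch of span-infilling inference with the
# HuggingFace conversion of north/t5_base_NCC. Generation settings are
# assumptions, not the author's recommended values.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "north/t5_base_NCC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Mask spans of a Norwegian sentence with T5 sentinel tokens, as in the card's widget example.
prompt = (
    "<extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på "
    "<extra_id_1>. Dette organet er øverste <extra_id_2> i Norge."
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
# The output contains the predicted fill-ins, delimited by the sentinel tokens.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

For downstream use, the card still recommends task-specific finetuning (for example with a fixed learning rate of 1e-3) rather than relying on the raw masked-LM behaviour shown here.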
north/t5_base_NCC_lm
north
2022-10-13T13:53:23Z
9
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-21T11:45:24Z
--- language: - no - nn - sv - dk - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: apache-2.0 --- The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|✔|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/norwegian_NCC_plus_English_pluss100k_lm_t5x_base/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage with the T5-models are their flexibility. Traditionally, encoder-only models (like BERT) excels in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). |**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shorter. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained an additonal 500.000 steps on from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). 
In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks: for translation and NLI, for instance, it is well documented that a step of unsupervised LM-training before finetuning gives a clear benefit.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (unlike the original T5) need to be finetuned to solve specific tasks. This finetuning is, however, usually not very compute-intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware.
The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models in this set. Which models are added will depend on the feedback from the users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
huggingtweets/beeple-farokh-punk6529
huggingtweets
2022-10-13T13:06:05Z
130
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-13T13:04:56Z
--- language: en thumbnail: http://www.huggingtweets.com/beeple-farokh-punk6529/1665666360072/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/264316321/beeple_headshot_beat_up_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1440017111531855879/A4p6F07H_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1565117155200438273/rJKca5g1_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">beeple & 6529 & Farokh | OpenSea Intern 👨🏻‍💼</div> <div style="text-align: center; font-size: 14px;">@beeple-farokh-punk6529</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from beeple & 6529 & Farokh | OpenSea Intern 👨🏻‍💼. | Data | beeple | 6529 | Farokh | OpenSea Intern 👨🏻‍💼 | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3250 | 3245 | | Retweets | 76 | 1047 | 266 | | Short tweets | 1273 | 452 | 902 | | Tweets kept | 1901 | 1751 | 2077 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3o60flk9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beeple-farokh-punk6529's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1skvil1b) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1skvil1b/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/beeple-farokh-punk6529') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/boredapeyc-garyvee-opensea
huggingtweets
2022-10-13T12:52:04Z
130
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-13T12:48:37Z
--- language: en thumbnail: http://www.huggingtweets.com/boredapeyc-garyvee-opensea/1665665519153/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1493524673962852353/qRxbC9Xq_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1544105652330631168/ZuvjfGkT_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1446569222352654344/Uc-tml-6_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Gary Vaynerchuk & OpenSea & Bored Ape Yacht Club</div> <div style="text-align: center; font-size: 14px;">@boredapeyc-garyvee-opensea</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Gary Vaynerchuk & OpenSea & Bored Ape Yacht Club. | Data | Gary Vaynerchuk | OpenSea | Bored Ape Yacht Club | | --- | --- | --- | --- | | Tweets downloaded | 3249 | 3239 | 3243 | | Retweets | 723 | 1428 | 3014 | | Short tweets | 838 | 410 | 11 | | Tweets kept | 1688 | 1401 | 218 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8ylc2l06/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boredapeyc-garyvee-opensea's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t159hph) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t159hph/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/boredapeyc-garyvee-opensea') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ddaow/distilbert-base-uncased-finetuned-squad
Ddaow
2022-10-13T10:31:10Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-09-15T01:28:33Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Ddaow/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Ddaow/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9692 - Train End Logits Accuracy: 0.7314 - Train Start Logits Accuracy: 0.6923 - Validation Loss: 1.1071 - Validation End Logits Accuracy: 0.7008 - Validation Start Logits Accuracy: 0.6691 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5125 | 0.6064 | 0.5677 | 1.1969 | 0.6799 | 0.6471 | 0 | | 0.9692 | 0.7314 | 0.6923 | 1.1071 | 0.7008 | 0.6691 | 1 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.5.2 - Tokenizers 0.13.1
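Since the usage sections of the card above are left empty, here is a hedged sketch of extractive question answering with this checkpoint. The repository tags list only TensorFlow weights, so the pipeline is pinned to the TF framework; the question/context pair is purely illustrative.

```python
# A minimal sketch of extractive question answering with this checkpoint.
# The repository tags list TensorFlow weights, so the pipeline is pinned to
# framework="tf"; the question/context pair below is purely illustrative.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Ddaow/distilbert-base-uncased-finetuned-squad",
    framework="tf",
)

result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased "
            "trained with Keras on a SQuAD-style dataset.",
)
print(result["answer"], result["score"])
```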
jonas/bert-base-uncased-finetuned-sdg
jonas
2022-10-13T09:32:35Z
110
3
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-13T09:18:20Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-sdg results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-sdg This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OSDG dataset. It achieves the following results on the evaluation set: - Loss: 0.3094 - Acc: 0.9195 ## Model description Classifies text to the first 16 SDGs! ## Intended uses & limitations Assess policy documents, classify text to SDGs, etc. ## Training and evaluation data OSDG data. Updated version from October. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Acc | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3768 | 1.0 | 269 | 0.3758 | 0.8933 | | 0.2261 | 2.0 | 538 | 0.3088 | 0.9095 | | 0.1038 | 3.0 | 807 | 0.3094 | 0.9195 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.0a0+8a1a93a - Datasets 2.5.2 - Tokenizers 0.13.1
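The card above describes classifying policy text into the first 16 SDGs but gives no inference snippet, so below is a minimal sketch with the text-classification pipeline. The example sentence is illustrative, and the exact id-to-SDG label mapping comes from the checkpoint's config rather than from the card.

```python
# A minimal sketch of SDG classification with this checkpoint. The example
# sentence is illustrative; the id-to-label mapping is defined in the model's
# config and is not documented in the card itself.
from transformers import pipeline

classifier = pipeline("text-classification", model="jonas/bert-base-uncased-finetuned-sdg")

text = "Expanding access to affordable renewable energy in rural communities."
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]
```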
classla/wav2vec2-large-slavic-voxpopuli-v2_hr_SER
classla
2022-10-13T09:15:43Z
262
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio", "audio-classification", "speech", "hr", "dataset:CrESv2.1", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-10-06T13:43:47Z
--- language: hr datasets: - CrESv2.1 tags: - audio - audio-classification - speech license: cc-by-nc-sa-4.0 --- # classla/wav2vec2-large-slavic-voxpopuli-v2_hr_SER This model for Croatian SER (speech emotion recognition) is based on the `facebook/wav2vec2-large-slavic-voxpopuli-v2` and was fine-tuned on the CrES 2.1 dataset (Croatian Emotional Speech corpus). If you use this model, please cite the following paper describing the dataset: ```latex @inproceedings{Dropuljić_Chmura_Kolak_Petrinović_2011, title={Emotional speech corpus of Croatian language}, ISSN={1845-5921}, booktitle={2011 7th International Symposium on Image and Signal Processing and Analysis (ISPA)}, author={Dropuljić, Branimir and Chmura, Miłosz Thomasz and Kolak, Antonio and Petrinović, Davor}, year={2011}, month={Sep}, pages={95–100} } ``` ## Metrics Evaluation is performed on the dev and test portions of the CrES 2.1 dataset. The splitting was performed anew, stratified on emotion and with no leakage (i.e. no speaker is present in more than one split). | accuracy | macro F1 | split | |----------|----------|-------| | 0.6796 | 0.6461 | test | | 0.7277 | 0.7232 | dev | Confusion matrix on test: ![](007_cm_test.jpg) ## Training hyperparameters In fine-tuning, the following arguments were used: | arg | value | |-------------------------------|-------| | `per_device_train_batch_size` | 2 | | `per_device_eval_batch_size` | 2 | | `gradient_accumulation_steps` | 2 | | `num_train_epochs` | 20 | | `learning_rate` | 1e-4 |
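The card above does not show how to run inference, so here is a hedged sketch using the audio-classification pipeline; the WAV path is a placeholder, and 16 kHz mono input is assumed, as is usual for wav2vec2 models.

```python
# A minimal sketch of speech emotion recognition with this checkpoint.
# "speech.wav" is a placeholder path; 16 kHz mono audio is assumed, as is
# usual for wav2vec2 models.
from transformers import pipeline

ser = pipeline(
    "audio-classification",
    model="classla/wav2vec2-large-slavic-voxpopuli-v2_hr_SER",
)

predictions = ser("speech.wav")
for p in predictions:
    print(p["label"], round(p["score"], 3))
```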
EkiShC/finetuning-sentiment-model
EkiShC
2022-10-13T08:55:31Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-13T07:05:33Z
--- tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
microsoft/beit-base-finetuned-ade-640-640
microsoft
2022-10-13T07:01:48Z
4,195
11
transformers
[ "transformers", "pytorch", "beit", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2106.08254", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-segmentation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # BEiT (base-sized model, fine-tuned on ADE20k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes. ## Intended uses & limitations You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. 
### How to use Here is how to use this model for semantic segmentation: ```python from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation from datasets import load_dataset from PIL import Image # load ADE20k image ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test") image = Image.open(ds[0]['file']) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-finetuned-ade-640-640') model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-base-finetuned-ade-640-640') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # logits are of shape (batch_size, num_labels, height/4, width/4) logits = outputs.logits ``` Currently, both the feature extractor and model support PyTorch. ## Training data This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```@article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
sxxyxn/kogpt_reduced_vocab
sxxyxn
2022-10-13T06:56:45Z
7
1
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "KakaoBrain", "KoGPT", "GPT", "GPT3", "ko", "arxiv:2104.09864", "arxiv:2109.04650", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-13T02:04:31Z
--- language: ko tags: - KakaoBrain - KoGPT - GPT - GPT3 license: cc-by-nc-4.0 --- # KoGPT KakaoBrain's Pre-Trained Language Models. * KoGPT (Korean Generative Pre-trained Transformer) * [https://github.com/kakaobrain/kogpt](https://github.com/kakaobrain/kogpt) * [https://huggingface.co/kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) ## Model Descriptions ### KoGPT6B-ryan1.5b * [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b) * [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b-float16\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b-float16) | Hyperparameter | Value | |:---------------------|--------------:| | \\(n_{parameters}\\) | 6,166,502,400 | | \\(n_{layers}\\) | 28 | | \\(d_{model}\\) | 4,096 | | \\(d_{ff}\\) | 16,384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2,048 | | \\(n_{vocab}\\) | 64,512 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | 64 | ## Hardware requirements ### KoGPT6B-ryan1.5b #### GPU The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT. * `32GB GPU RAM` in the required minimum memory size ### KoGPT6B-ryan1.5b-float16 #### GPU The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT. * half-precision requires NVIDIA GPUS based on Volta, Turing or Ampere * `16GB GPU RAM` in the required minimum memory size ## Usage ### prompt ```bash python -m kogpt --help usage: KoGPT inference [-h] [--model MODEL] [--revision {KoGPT6B-ryan1.5b}] [--device {cpu,cuda}] [-d] KakaoBrain Korean(hangul) Generative Pre-Training Model optional arguments: -h, --help show this help message and exit --model MODEL huggingface repo (default:kakaobrain/kogpt) --revision {KoGPT6B-ryan1.5b} --device {cpu,cuda} (default:cuda) -d, --debug ``` ```bash python -m kogpt prompt> 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 temperature(0.8)> max_length(128)> 64 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상 prompt> ... ``` ### python ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained( 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]', pad_token='[PAD]', mask_token='[MASK]' ) model = AutoModelForCausalLM.from_pretrained( 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b pad_token_id=tokenizer.eos_token_id, torch_dtype='auto', low_cpu_mem_usage=True ).to(device='cuda', non_blocking=True) _ = model.eval() prompt = '인간처럼 생각하고, 행동하는 \'지능\'을 통해 인류가 이제까지 풀지 못했던' with torch.no_grad(): tokens = tokenizer.encode(prompt, return_tensors='pt').to(device='cuda', non_blocking=True) gen_tokens = model.generate(tokens, do_sample=True, temperature=0.8, max_length=64) generated = tokenizer.batch_decode(gen_tokens)[0] print(generated) # print: 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상 ``` ## Experiments ### In-context Few-Shots | Models | #params | NSMC (Acc.) 
| YNAT (F1) | KLUE-STS (F1) | |:--------------|--------:|------------:|----------:|--------------:| | HyperCLOVA[1] | 1.3B | 83.9 | 58.7 | 60.9 | | HyperCLOVA[1] | 6.9B | 83.8 | 67.5 | 59.3 | | HyperCLOVA[1] | 13.0B | 87.9 | 67.9 | 60.0 | | HyperCLOVA[1] | 39.0B | 88.0 | 71.4 | 61.6 | | HyperCLOVA[1] | 82.0B | **88.2** | 72.7 | **65.1** | | **Ours** | 6.0B | 87.8 | **78.0** | 64.3 | ### Finetuning / P-Tuning We have been reported to have issues(https://github.com/kakaobrain/kogpt/issues/17) with our downstream evaluation. The previously published performance evaluation table was deleted because it was difficult to see it as a fair comparison because the comparison target algorithm was different and the performance measurement method could not be confirmed. You can refer to the above issue link for the existing performance evaluation table and troubleshooting results. ## Limitations KakaoBrain `KoGPT` was trained on `rayn dataset`, a dataset known to contain profanity, lewd, political changed, and other harsh language. Therefore, `KoGPT` can generate socially unacceptable texts. As with all language models, It is difficult to predict in advance how `KoGPT` will response to particular prompts and offensive content without warning. Primarily Korean: `KoGPT` is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts. `KoGPT` by default perform worse on inputs that are different from the data distribution it is trained on, including non-Korean as well as specific dialects of Korean that are not well represented in the training data. [comment]: <> (If abnormal or socially unacceptable text is generated during testing, please send a "prompt" and the "generated text" to [[email protected]]&#40;mailto:[email protected]&#41;. ) 카카오브레인 `KoGPT`는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 `rayn dataset`으로 학습하였습니다. 따라서 `KoGPT`는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다. `KoGPT`는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다. 기본적으로 `KoGPT`는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다. [comment]: <> (테스트중에 발생한 비정상적인 혹은 사회적으로 용인되지 않는 텍스트가 생성된 경우 [[email protected]]&#40;mailto:[email protected]&#41;로 "prompt"와 "생성된 문장"을 함께 보내주시기 바랍니다.) ## Citation If you apply this library or model to any project and research, please cite our code: ``` @misc{kakaobrain2021kogpt, title = {KoGPT: KakaoBrain Korean(hangul) Generative Pre-trained Transformer}, author = {Ildoo Kim and Gunsoo Han and Jiyeon Ham and Woonhyuk Baek}, year = {2021}, howpublished = {\url{https://github.com/kakaobrain/kogpt}}, } ``` ## Contact This is released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us. [[email protected]](mailto:[email protected]) ## License The `source code` of KakaoBrain `KoGPT` are licensed under [Apache 2.0](LICENSE.apache-2.0) License. The `pretrained wieghts` of KakaoBrain `KoGPT` are licensed under [CC-BY-NC-ND 4.0 License](https://creativecommons.org/licenses/by-nc-nd/4.0/) License. 카카오브레인 `KoGPT`의 `소스코드(source code)`는 [Apache 2.0](LICENSE.apache-2.0) 라이선스 하에 공개되어 있습니다. 카카오브레인 `KoGPT`의 `사전학습된 가중치(pretrained weights)`는 [CC-BY-NC-ND 4.0 라이선스](https://creativecommons.org/licenses/by-nc-nd/4.0/) 라이선스 하에 공개되어 있습니다. 모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 
라이선스 전문은 [Apache 2.0](LICENSE.apache-2.0), [LICENSE.cc-by-nc-nd-4.0](LICENSE.cc-by-nc-nd-4.0) 파일에서 확인하실 수 있습니다. ## References [1] [HyperCLOVA](https://arxiv.org/abs/2109.04650): Kim, Boseop, et al. "What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers." arXiv preprint arXiv:2109.04650 (2021).
luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2
luomingshuang
2022-10-13T06:43:37Z
0
3
null
[ "onnx", "region:us" ]
null
2022-05-19T14:32:27Z
Note: This recipe is trained with the codes from this PR https://github.com/k2-fsa/icefall/pull/349 # Pre-trained Transducer-Stateless2 models for the WenetSpeech dataset with icefall. The model was trained on the L subset of WenetSpeech with the scripts in [icefall](https://github.com/k2-fsa/icefall) based on the latest version k2. ## Training procedure The main repositories are list below, we will update the training and decoding scripts with the update of version. k2: https://github.com/k2-fsa/k2 icefall: https://github.com/k2-fsa/icefall lhotse: https://github.com/lhotse-speech/lhotse * Install k2 and lhotse, k2 installation guide refers to https://k2.readthedocs.io/en/latest/installation/index.html, lhotse refers to https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. I think the latest version would be ok. And please also install the requirements listed in icefall. * Clone icefall(https://github.com/k2-fsa/icefall) and check to the commit showed above. ``` git clone https://github.com/k2-fsa/icefall cd icefall ``` * Preparing data. ``` cd egs/wenetspeech/ASR bash ./prepare.sh ``` * Training ``` export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" ./pruned_transducer_stateless2/train.py \ --world-size 8 \ --num-epochs 15 \ --start-epoch 0 \ --exp-dir pruned_transducer_stateless2/exp \ --lang-dir data/lang_char \ --max-duration 180 \ --valid-interval 3000 \ --model-warm-step 3000 \ --save-every-n 8000 \ --training-subset L ``` ## Evaluation results The decoding results (WER%) on WenetSpeech(dev, test-net and test-meeting) are listed below, we got this result by averaging models from epoch 9 to 10. The WERs are | | dev | test-net | test-meeting | comment | |------------------------------------|-------|----------|--------------|------------------------------------------| | greedy search | 7.80 | 8.75 | 13.49 | --epoch 10, --avg 2, --max-duration 100 | | modified beam search (beam size 4) | 7.76 | 8.71 | 13.41 | --epoch 10, --avg 2, --max-duration 100 | | fast beam search (1best) | 7.94 | 8.74 | 13.80 | --epoch 10, --avg 2, --max-duration 1500 | | fast beam search (nbest) | 9.82 | 10.98 | 16.37 | --epoch 10, --avg 2, --max-duration 600 | | fast beam search (nbest oracle) | 6.88 | 7.18 | 11.77 | --epoch 10, --avg 2, --max-duration 600 | | fast beam search (nbest LG) | 14.94 | 16.14 | 22.93 | --epoch 10, --avg 2, --max-duration 600 |
sxxyxn/kogpt2_reduced_vocab
sxxyxn
2022-10-13T06:39:24Z
200
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-13T04:33:40Z
--- language: ko tags: - gpt2 license: cc-by-nc-sa-4.0 --- For more details: https://github.com/SKT-AI/KoGPT2
azad-wolf-se/FExGAN-Meta
azad-wolf-se
2022-10-13T05:59:58Z
0
0
null
[ "Computer Vision", "Machine Learning", "Deep Learning", "en", "arxiv:2201.09061", "region:us" ]
null
2022-10-13T05:33:52Z
--- language: en tags: - Computer Vision - Machine Learning - Deep Learning --- # FExGAN-Meta: Facial Expression Generation with Meta-Humans ![FExGAN-Meta GIF Demo](https://github.com/azadlab/FExGAN-Meta/blob/master/FExGAN-Meta.gif?raw=true) This is the demo of FExGAN-Meta, proposed in the following article: [FExGAN-Meta: Facial Expression Generation with Meta-Humans](https://www.arxiv.com) FExGAN-Meta is an extension of [FExGAN](http://arxiv.org/abs/2201.09061). It takes as input an image of a Meta-Human and a vector of the desired affect (e.g. angry, disgust, sad, surprise, joy, neutral and fear) and converts the input image to the desired emotion while keeping the identity of the original image. ![FExGAN-Meta results](https://github.com/azadlab/FExGAN-Meta/blob/master/results.png?raw=true) # Requirements In order to run this you need the following: * Python >= 3.7 * Tensorflow >= 2.6 * CUDA enabled GPU with memory >= 8GB (e.g. GTX1070/GTX1080) # Usage Code https://www.github.com/azadlab/FExGAN-Meta # Citation If you use any part of this code or use ideas mentioned in the paper, please cite the following article. ``` @article{Siddiqui_FExGAN-Meta_2022, author = {{Siddiqui}, J. Rafid}, title = {{FExGAN-Meta: Facial Expression Generation with Meta-Humans}}, journal = {ArXiv e-prints}, archivePrefix = "arXiv", keywords = {Deep Learning, GAN, Facial Expressions}, year = {2022}, url = {http://arxiv.org/abs/2201.09061}, } ```
Shebna/distilbert-base-uncased-finetuned-cola
Shebna
2022-10-13T05:25:14Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-12T12:10:49Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: train args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.542244787638552 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8054 - Matthews Correlation: 0.5422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5231 | 1.0 | 535 | 0.5317 | 0.4122 | | 0.348 | 2.0 | 1070 | 0.5014 | 0.5166 | | 0.2365 | 3.0 | 1605 | 0.5800 | 0.5305 | | 0.1833 | 4.0 | 2140 | 0.7610 | 0.5288 | | 0.1381 | 5.0 | 2675 | 0.8054 | 0.5422 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
format37/PPO-MountainCar-v0
format37
2022-10-13T04:43:04Z
3
0
stable-baselines3
[ "stable-baselines3", "MountainCar-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-13T04:42:45Z
--- library_name: stable-baselines3 tags: - MountainCar-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -151.80 +/- 16.12 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: MountainCar-v0 type: MountainCar-v0 --- # **PPO** Agent playing **MountainCar-v0** This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
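Until the usage section is filled in, a minimal (untested) sketch with `huggingface_sb3` and `stable-baselines3` is given below; the checkpoint filename `ppo-MountainCar-v0.zip` is an assumption based on common naming and should be checked against the files in this repository.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is a guess; check the repository's file list for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="format37/PPO-MountainCar-v0", filename="ppo-MountainCar-v0.zip")
model = PPO.load(checkpoint)

env = gym.make("MountainCar-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```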
eunyounglee/mBART_translator_json_3
eunyounglee
2022-10-13T04:12:26Z
102
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-13T02:37:26Z
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: mBART_translator_json_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART_translator_json_3 This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2480 - Bleu: 72.3119 - Gen Len: 38.8266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:| | No log | 1.0 | 444 | 1.5654 | 37.2408 | 115.2556 | | 4.2672 | 2.0 | 888 | 0.9088 | 58.5669 | 56.1363 | | 1.754 | 3.0 | 1332 | 0.6627 | 56.8038 | 56.2753 | | 1.2023 | 4.0 | 1776 | 0.5349 | 59.8569 | 35.3384 | | 0.9387 | 5.0 | 2220 | 0.4390 | 66.4894 | 46.4797 | | 0.7839 | 6.0 | 2664 | 0.3663 | 68.8133 | 46.3215 | | 0.664 | 7.0 | 3108 | 0.3127 | 67.7323 | 37.8041 | | 0.5833 | 8.0 | 3552 | 0.2790 | 69.3004 | 38.8193 | | 0.5833 | 9.0 | 3996 | 0.2543 | 70.0163 | 38.4707 | | 0.5206 | 10.0 | 4440 | 0.2480 | 72.3119 | 38.8266 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
mariolinml/deberta-v3-base_nli_2x_v0
mariolinml
2022-10-13T02:37:26Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-13T01:15:29Z
--- license: mit tags: - generated_from_trainer model-index: - name: deberta-v3-base_nli_2x_v0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base_nli_2x_v0 This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
eunyounglee/mBART_translator_json_2
eunyounglee
2022-10-13T02:12:34Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-12T02:56:11Z
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: mBART_translator_json_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART_translator_json_2 This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1203 - Bleu: 77.8658 - Gen Len: 36.1527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.7858 | 1.0 | 1912 | 0.6568 | 55.2937 | 75.6389 | | 0.994 | 2.0 | 3824 | 0.4015 | 71.3655 | 35.744 | | 0.7267 | 3.0 | 5736 | 0.2971 | 66.7522 | 34.5473 | | 0.5916 | 4.0 | 7648 | 0.2437 | 80.0233 | 37.4331 | | 0.502 | 5.0 | 9560 | 0.2072 | 80.9632 | 36.9833 | | 0.433 | 6.0 | 11472 | 0.1767 | 69.9384 | 36.6381 | | 0.3581 | 7.0 | 13384 | 0.1566 | 64.615 | 34.8954 | | 0.3244 | 8.0 | 15296 | 0.1382 | 77.5563 | 36.1736 | | 0.2815 | 9.0 | 17208 | 0.1259 | 76.1662 | 36.1548 | | 0.2555 | 10.0 | 19120 | 0.1203 | 77.8658 | 36.1527 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
din0s/t5-small-finetuned-en-to-it-hrs
din0s
2022-10-13T01:53:31Z
23
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-12T23:15:39Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: t5-small-finetuned-en-to-it-hrs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-en-to-it-hrs This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1558 - Bleu: 9.8991 - Gen Len: 51.8287 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 2.0084 | 1.0 | 1125 | 2.8804 | 4.4102 | 67.6067 | | 1.7918 | 2.0 | 2250 | 2.7757 | 6.1959 | 58.0313 | | 1.6944 | 3.0 | 3375 | 2.6845 | 6.9152 | 55.6953 | | 1.5955 | 4.0 | 4500 | 2.6219 | 7.3056 | 54.8213 | | 1.5304 | 5.0 | 5625 | 2.5659 | 7.9427 | 53.4173 | | 1.52 | 6.0 | 6750 | 2.5249 | 8.2049 | 53.678 | | 1.4934 | 7.0 | 7875 | 2.4853 | 8.6612 | 52.304 | | 1.4518 | 8.0 | 9000 | 2.4522 | 8.7991 | 52.6467 | | 1.4393 | 9.0 | 10125 | 2.4353 | 8.8251 | 52.7047 | | 1.4196 | 10.0 | 11250 | 2.4027 | 9.01 | 52.5387 | | 1.405 | 11.0 | 12375 | 2.3797 | 9.1513 | 52.0273 | | 1.3741 | 12.0 | 13500 | 2.3590 | 9.2401 | 52.3373 | | 1.3693 | 13.0 | 14625 | 2.3378 | 9.3611 | 52.1507 | | 1.3638 | 14.0 | 15750 | 2.3226 | 9.4213 | 52.2813 | | 1.3366 | 15.0 | 16875 | 2.3071 | 9.5199 | 52.1507 | | 1.3294 | 16.0 | 18000 | 2.2943 | 9.5296 | 51.9587 | | 1.3258 | 17.0 | 19125 | 2.2788 | 9.6231 | 51.5807 | | 1.3152 | 18.0 | 20250 | 2.2693 | 9.6586 | 51.8933 | | 1.3023 | 19.0 | 21375 | 2.2543 | 9.6762 | 51.5733 | | 1.3061 | 20.0 | 22500 | 2.2451 | 9.6926 | 51.6727 | | 1.3004 | 21.0 | 23625 | 2.2344 | 9.773 | 51.6527 | | 1.2839 | 22.0 | 24750 | 2.2242 | 9.7973 | 51.8113 | | 1.2869 | 23.0 | 25875 | 2.2161 | 9.8177 | 51.9073 | | 1.2819 | 24.0 | 27000 | 2.2115 | 9.8183 | 51.6707 | | 1.2642 | 25.0 | 28125 | 2.2037 | 9.7645 | 52.0853 | | 1.2685 | 26.0 | 29250 | 2.1984 | 9.7764 | 51.6927 | | 1.2609 | 27.0 | 30375 | 2.1934 | 9.7205 | 51.9647 | | 1.2585 | 28.0 | 31500 | 2.1834 | 9.8116 | 51.7373 | | 1.2564 | 29.0 | 32625 | 2.1811 | 9.8547 | 51.8553 | | 1.2563 | 30.0 | 33750 | 2.1766 | 9.8346 | 51.7293 | | 1.258 | 31.0 | 34875 | 2.1748 | 9.8204 | 51.6747 | | 1.2391 | 32.0 | 36000 | 2.1708 | 9.8485 | 51.7647 | | 1.2364 | 33.0 | 37125 | 2.1644 | 9.8503 | 51.6713 | | 1.2436 | 34.0 | 38250 | 2.1629 | 9.8457 | 51.76 | | 1.2408 | 35.0 | 39375 | 2.1614 | 9.8899 | 51.6893 | | 1.2564 | 36.0 | 40500 | 2.1591 | 9.8867 | 51.706 | | 1.2318 | 37.0 | 41625 | 2.1575 | 9.866 | 51.782 | | 1.2423 | 38.0 | 42750 | 2.1570 | 9.8756 | 51.8933 | | 1.2399 | 39.0 | 43875 | 2.1558 | 9.8871 | 51.7967 | | 1.2339 | 40.0 | 45000 | 2.1558 | 9.8991 | 51.8287 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.11.0
alibaba-pai/pai-ckbert-huge-zh
alibaba-pai
2022-10-13T01:42:48Z
141
3
transformers
[ "transformers", "pytorch", "megatron-bert", "bert", "fill-mask", "zh", "arxiv:2205.00258", "arxiv:2210.05287", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-11T03:27:18Z
---
language: zh
pipeline_tag: fill-mask
widget:
- text: "巴黎是[MASK]国的首都。"
- text: "生活的真谛是[MASK]。"
tags:
- bert
license: apache-2.0
---

## Chinese Knowledge-enhanced BERT (CKBERT)

Knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis. Unlike English, there is a lack of high-performing open-source Chinese KEPLMs in the natural language processing (NLP) community to support various language understanding applications.

For Chinese natural language processing, we provide three **Chinese Knowledge-enhanced BERT (CKBERT)** models named **pai-ckbert-bert-zh**, **pai-ckbert-large-zh** and **pai-ckbert-huge-zh**, from our **EMNLP 2022** paper titled **Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training**.

This repository is developed based on the EasyNLP framework: [https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP)

## Citation

If you find this resource useful, please cite the following papers in your work.

- For the EasyNLP framework:
```
@article{easynlp,
  title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
  author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
  publisher = {arXiv},
  url = {https://arxiv.org/abs/2205.00258},
  year = {2022}
}
```
- For CKBERT:
```
@article{ckbert,
  title = {Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training},
  author = {Zhang, Taolin and Dong, Junwei and Wang, Jianing and Wang, Chengyu and Wang, An and Liu, Yinghui and Huang, Jun and Li, Yong and He, Xiaofeng},
  publisher = {EMNLP},
  url = {https://arxiv.org/abs/2210.05287},
  year = {2022}
}
```
alibaba-pai/pai-ckbert-large-zh
alibaba-pai
2022-10-13T01:42:12Z
191
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "zh", "arxiv:2205.00258", "arxiv:2210.05287", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-01T08:26:30Z
---
language: zh
pipeline_tag: fill-mask
tags:
- bert
license: apache-2.0
---

## Chinese Knowledge-enhanced BERT (CKBERT)

Knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis. Unlike English, there is a lack of high-performing open-source Chinese KEPLMs in the natural language processing (NLP) community to support various language understanding applications.

For Chinese natural language processing, we provide three **Chinese Knowledge-enhanced BERT (CKBERT)** models named **pai-ckbert-bert-zh**, **pai-ckbert-large-zh** and **pai-ckbert-huge-zh**, from our **EMNLP 2022** paper titled **Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training**.

This repository is developed based on the EasyNLP framework: [https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP)

## Citation

If you find this resource useful, please cite the following papers in your work.

- For the EasyNLP framework:
```
@article{easynlp,
  title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
  author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
  publisher = {arXiv},
  url = {https://arxiv.org/abs/2205.00258},
  year = {2022}
}
```
- For CKBERT:
```
@article{ckbert,
  title = {Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training},
  author = {Zhang, Taolin and Dong, Junwei and Wang, Jianing and Wang, Chengyu and Wang, An and Liu, Yinghui and Huang, Jun and Li, Yong and He, Xiaofeng},
  publisher = {EMNLP},
  url = {https://arxiv.org/abs/2210.05287},
  year = {2022}
}
```
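A minimal (untested) loading sketch with the standard `transformers` fill-mask pipeline is shown below; it assumes the checkpoint works with the default Auto classes and the `[MASK]` token.

```python
from transformers import pipeline

# Assumes the default fill-mask pipeline handles this checkpoint.
fill_mask = pipeline("fill-mask", model="alibaba-pai/pai-ckbert-large-zh")
print(fill_mask("巴黎是[MASK]国的首都。"))
```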
tiagoblima/punctuation-finetune-mec
tiagoblima
2022-10-13T00:05:42Z
114
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-12T23:42:07Z
--- license: mit tags: - generated_from_trainer model-index: - name: punctuation-finetune-mec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # punctuation-finetune-mec This model is a fine-tuned version of [tiagoblima/punctuation-taboa-bert](https://huggingface.co/tiagoblima/punctuation-taboa-bert) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 411 | 0.1356 | 0.9791 | 0.7083 | 0.8220 | 0.9553 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
nvidia/slu_conformer_transformer_large_slurp
nvidia
2022-10-12T23:18:08Z
3
2
nemo
[ "nemo", "spoken-language-understanding", "speech-intent-classification", "speech-slot-filling", "SLURP", "Conformer", "Transformer", "pytorch", "NeMo", "en", "dataset:SLURP", "arxiv:2011.13205", "arxiv:2005.08100", "arxiv:1706.03762", "license:cc-by-4.0", "model-index", "region:us" ]
null
2022-08-29T14:17:42Z
---
language:
- en
library_name: nemo
datasets:
- SLURP
thumbnail: null
tags:
- spoken-language-understanding
- speech-intent-classification
- speech-slot-filling
- SLURP
- Conformer
- Transformer
- pytorch
- NeMo
license: cc-by-4.0
model-index:
- name: slu_conformer_transformer_large_slurp
  results:
  - task:
      name: Slot Filling
      type: slot-filling
    dataset:
      name: SLURP
      type: slurp
      split: test
    metrics:
    - name: F1
      type: f1
      value: 82.27
  - task:
      name: Intent Classification
      type: intent-classification
    dataset:
      name: SLURP
      type: slurp
      split: test
    metrics:
    - name: Accuracy
      type: acc
      value: 90.14
---

# NeMo End-to-End Speech Intent Classification and Slot Filling

## Model Overview

This model performs joint intent classification and slot filling, directly from audio input. The model treats the problem as an audio-to-text problem, where the output text is the flattened string representation of the semantics annotation. The model is trained on the SLURP dataset [1].

## Model Architecture

The model has an encoder-decoder architecture, where the encoder is a Conformer-Large model [2], and the decoder is a three-layer Transformer Decoder [3]. We use the Conformer encoder pretrained on NeMo ASR-Set (details [here](https://ngc.nvidia.com/models/nvidia:nemo:stt_en_conformer_ctc_large)), while the decoder is trained from scratch. Start-of-sentence (BOS) and end-of-sentence (EOS) tokens are added to each sentence. The model is trained end-to-end by minimizing the negative log-likelihood loss with teacher forcing. During inference, the prediction is generated by beam search, where a BOS token is used to trigger the generation process.

## Training

The NeMo toolkit [4] was used for training the models for around 100 epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/slu/slurp/run_slurp_train.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/slu/slurp/configs/conformer_transformer_large_bpe.yaml). The tokenizers for these models were built using the semantics annotations of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). We use a vocabulary size of 58, including the BOS, EOS and padding tokens. Details on how to train the model can be found [here](https://github.com/NVIDIA/NeMo/blob/main/examples/slu/speech_intent_slot/README.md).

### Datasets

The model is trained on the combined real and synthetic training sets of the SLURP dataset.

## Performance

| | | | | **Intent (Scenario_Action)** | | **Entity** | | | **SLURP Metrics** | |
|-------|-----------------------------|----------------|------------------|------------------------------|---------------|------------|--------|---------------|------------|--------|
|**Version**| **Model** | **Params (M)** | **Pretrained** | **Accuracy** | **Precision** | **Recall** | **F1** | **Precision** | **Recall** | **F1** |
|1.13.0| Conformer-Transformer-Large | 127 | NeMo ASR-Set 3.0 | 90.14 | 78.95 | 74.93 | 76.89 | 84.31 | 80.33 | 82.27 |
|Baseline| Conformer-Transformer-Large | 127 | None | 72.56 | 43.19 | 43.5 | 43.34 | 53.59 | 53.92 | 53.76 |

Note: during inference, we use a beam size of 32 and a temperature of 1.25.

## How to Use this Model

The model is available for use in the NeMo toolkit [4], and can be used on another dataset with the same annotation format.
### Automatically load the model from NGC

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.SLUIntentSlotBPEModel.from_pretrained(model_name="slu_conformer_transformer_large_slurp")
```

### Predict intents and slots with this model

```shell
python [NEMO_GIT_FOLDER]/examples/slu/speech_intent_slot/eval_utils/inference.py \
    pretrained_name="slu_conformer_transformer_large_slurp" \
    audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
    sequence_generator.type="<'beam' OR 'greedy' FOR BEAM/GREEDY SEARCH>" \
    sequence_generator.beam_size="<SIZE OF BEAM>" \
    sequence_generator.temperature="<TEMPERATURE FOR BEAM SEARCH>"
```

### Input

This model accepts 16000 Hz mono-channel audio (wav files) as input.

### Output

This model provides the intent and slot annotations as a string for a given audio sample.

## Limitations

Since this model was trained on only the SLURP dataset [1], the performance of this model might degrade on other datasets.

## References

[1] [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/abs/2011.13205)

[2] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)

[3] [Attention Is All You Need](https://arxiv.org/abs/1706.03762?context=cs)

[4] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
xdai/mimic_longformer_base
xdai
2022-10-12T22:56:32Z
204
0
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "Clinical notes", "Discharge summaries", "en", "dataset:MIMIC-III", "arxiv:2204.06683", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-06-12T04:47:58Z
---
language: en
license: cc-by-4.0
tags:
- Clinical notes
- Discharge summaries
- longformer
datasets:
- MIMIC-III
---

* Continued pre-training of RoBERTa-base using discharge summaries from the MIMIC-III dataset.
* Details can be found in the following paper:

> Xiang Dai and Ilias Chalkidis and Sune Darkner and Desmond Elliott. 2022. Revisiting Transformer-based Models for Long Document Classification. (https://arxiv.org/abs/2204.06683)

* Important hyper-parameters:

| Hyper-parameter | Value |
|---|---|
| Max sequence | 4096 |
| Batch size | 8 |
| Learning rate | 5e-5 |
| Training epochs | 6 |
| Training time | 130 GPU-hours |
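A minimal (untested) usage sketch with the `transformers` fill-mask pipeline is shown below; it assumes the checkpoint loads with the default Auto classes and uses the RoBERTa-style `<mask>` token, and the example sentence is only illustrative.

```python
from transformers import pipeline

# Assumes the default fill-mask pipeline and the <mask> token work for this checkpoint.
fill_mask = pipeline("fill-mask", model="xdai/mimic_longformer_base")
print(fill_mask("The patient was discharged home in stable <mask>."))
```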
tiagoblima/punctuation-taboa-bert
tiagoblima
2022-10-12T22:39:49Z
112
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:tapaco", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-12T18:52:16Z
--- license: mit tags: - generated_from_trainer datasets: - tapaco metrics: - precision - recall - f1 - accuracy model-index: - name: punctuation-taboa-bert results: - task: name: Token Classification type: token-classification dataset: name: tapaco type: tapaco config: all_languages split: train args: all_languages metrics: - name: Precision type: precision value: 0.9849559686888454 - name: Recall type: recall value: 0.9836325882496642 - name: F1 type: f1 value: 0.9842938336490864 - name: Accuracy type: accuracy value: 0.9945622875893589 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # punctuation-taboa-bert This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the tapaco dataset. It achieves the following results on the evaluation set: - Loss: 0.0181 - Precision: 0.9850 - Recall: 0.9836 - F1: 0.9843 - Accuracy: 0.9946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0272 | 1.0 | 17438 | 0.0181 | 0.9850 | 0.9836 | 0.9843 | 0.9946 | | 0.0234 | 2.0 | 34876 | 0.0196 | 0.9870 | 0.9853 | 0.9862 | 0.9948 | | 0.0092 | 3.0 | 52314 | 0.0233 | 0.9874 | 0.9853 | 0.9864 | 0.9950 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
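A minimal (untested) usage sketch with the `transformers` token-classification pipeline is shown below; the example sentence and the interpretation of the predicted labels as punctuation marks are assumptions to verify against the model's config.

```python
from transformers import pipeline

# Assumes the default token-classification pipeline and the label set stored in the model config.
punctuator = pipeline("token-classification", model="tiagoblima/punctuation-taboa-bert")
print(punctuator("olá tudo bem com você"))
```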
IShallRiseAgain/Goosebumps
IShallRiseAgain
2022-10-12T21:23:50Z
0
1
null
[ "region:us" ]
null
2022-10-12T20:57:23Z
A quick little model I did; I'm probably not going to update this. The prompt is "GoosebumpsCover book_cover".
edvinkxs/finetuning-sentiment-model-3000-samples
edvinkxs
2022-10-12T20:13:12Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-05T12:57:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8866666666666667 - name: F1 type: f1 value: 0.8903225806451613 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3498 - Accuracy: 0.8867 - F1: 0.8903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
EddyGiusepe/bert-finetuned-ner
EddyGiusepe
2022-10-12T20:00:09Z
125
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-12T19:36:35Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0602 - Precision: 0.9335 - Recall: 0.9517 - F1: 0.9425 - Accuracy: 0.9864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0852 | 1.0 | 1756 | 0.0685 | 0.9208 | 0.9367 | 0.9287 | 0.9829 | | 0.0336 | 2.0 | 3512 | 0.0612 | 0.9281 | 0.9495 | 0.9387 | 0.9856 | | 0.0181 | 3.0 | 5268 | 0.0602 | 0.9335 | 0.9517 | 0.9425 | 0.9864 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.1
siddharth963/vit-base-patch16-224-in21k-finetuned-cassava3
siddharth963
2022-10-12T19:37:08Z
216
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:image_folder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-12T17:13:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: vit-base-patch16-224-in21k-finetuned-cassava3 results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: Accuracy type: accuracy value: 0.8855140186915887 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-cassava3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.3419 - Accuracy: 0.8855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5624 | 0.99 | 133 | 0.5866 | 0.8166 | | 0.4717 | 1.99 | 266 | 0.4245 | 0.8692 | | 0.4105 | 2.99 | 399 | 0.3708 | 0.8811 | | 0.3753 | 3.99 | 532 | 0.3646 | 0.8787 | | 0.2997 | 4.99 | 665 | 0.3655 | 0.8780 | | 0.3176 | 5.99 | 798 | 0.3545 | 0.8822 | | 0.2849 | 6.99 | 931 | 0.3441 | 0.8850 | | 0.2931 | 7.99 | 1064 | 0.3419 | 0.8855 | | 0.27 | 8.99 | 1197 | 0.3419 | 0.8848 | | 0.2927 | 9.99 | 1330 | 0.3403 | 0.8853 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
format37/DQN-MountainCar-v0
format37
2022-10-12T19:17:23Z
4
0
stable-baselines3
[ "stable-baselines3", "MountainCar-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-12T19:17:01Z
--- library_name: stable-baselines3 tags: - MountainCar-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: -200.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: MountainCar-v0 type: MountainCar-v0 --- # **DQN** Agent playing **MountainCar-v0** This is a trained model of a **DQN** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code
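Until the usage section is filled in, a minimal (untested) sketch with `huggingface_sb3` and `stable-baselines3` is given below; the checkpoint filename `dqn-MountainCar-v0.zip` is an assumption to verify against the files in this repository.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is a guess; check the repository's file list for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="format37/DQN-MountainCar-v0", filename="dqn-MountainCar-v0.zip")
model = DQN.load(checkpoint)

env = gym.make("MountainCar-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```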
dsilin/detok-deberta-xl
dsilin
2022-10-12T18:56:34Z
163
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: en widget: - text: "They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ," - text: "\" We 'll talk go by now ; \" says Shucksmith ;" - text: "\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said ." --- This model can be used to more accurately detokenize the moses tokenizer (it does a better job with certain lossy quotes and things) batched usage: ```python sentences = [ "They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ,", "\" We 'll talk go by now ; \" says Shucksmith ;", "He 'll enjoy it more now that this he be dead , if put 'll pardon the expression .", "I think you 'll be amazed at this way it finds ,", "Michigan voters ^ are so frightened of fallen in permanent economic collapse that they 'll grab onto anything .", "You 'll finding outs episode 4 .", "\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said .", "You can look at the things I 'm saying about my record and about the events of campaign and history and you 'll find if now and and then I miss a words or I get something slightly off , I 'll correct it , acknowledge where it are wrong .", "Wonder if 'll alive to see .", "We 'll have to combine and a numbered of people ." ] def sentences_to_input_tokens(sentences): all_tokens = [] max_length = 0 sents_tokens = [] iids = tokenizer(sentences) for sent_tokens in iids['input_ids']: sents_tokens.append(sent_tokens) if len(sent_tokens) > max_length: max_length = len(sent_tokens) attention_mask = [1] * len(sent_tokens) pos_ids = list(range(len(sent_tokens))) encoding = { "iids": sent_tokens, "am": attention_mask, "pos": pos_ids } all_tokens.append(encoding) input_ids = [] attention_masks = [] position_ids = [] for i in range(len(all_tokens)): encoding = all_tokens[i] pad_len = max_length - len(encoding['iids']) attention_masks.append(encoding['am'] + [0] * pad_len) position_ids.append(encoding['pos'] + [0] * pad_len) input_ids.append(encoding['iids'] + [tokenizer.pad_token_id] * pad_len) encoding = { "input_ids": torch.tensor(input_ids).to(device), "attention_mask": torch.tensor(attention_masks).to(device), "position_ids": torch.tensor(position_ids).to(device) } return encoding, sents_tokens def run_token_predictor_sentences(sentences): encoding, at = sentences_to_input_tokens(sentences) predictions = model(**encoding)[0].cpu().tolist() outstrs = [] for i in range(len(predictions)): outstr = "" for p in zip(tokenizer.convert_ids_to_tokens(at[i][1:-1]), predictions[i][1:-1]): if not "▁" in p[0]: outstr+=p[0] else: if p[1][0] > p[1][1]: outstr+=p[0].replace("▁", " ") else: outstr+=p[0].replace("▁", "") outstrs.append(outstr.strip()) return outstrs outs = run_token_predictor_sentences(sentences) for p in zip(outs, sentences): print(p[1]) print(p[0]) print('\n------\n') ```
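The snippet above uses `tokenizer`, `model`, and `device` without defining them; a minimal setup sketch is shown below. It assumes the checkpoint loads as a token-classification model through the standard Auto classes, which should be verified against the repository's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed setup for the batched-usage snippet above; verify the head class against the repo config.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("dsilin/detok-deberta-xl")
model = AutoModelForTokenClassification.from_pretrained("dsilin/detok-deberta-xl").to(device)
model.eval()
```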
stevhliu/my_awesome_swag_model
stevhliu
2022-10-12T18:53:50Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-10-12T17:52:19Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: my_awesome_swag_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_swag_model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.5192 - Accuracy: 0.7981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6873 | 1.0 | 4597 | 0.5192 | 0.7981 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
huggingtweets/nickjr-nickschedules
huggingtweets
2022-10-12T18:25:09Z
130
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-12T18:23:40Z
--- language: en thumbnail: http://www.huggingtweets.com/nickjr-nickschedules/1665599104651/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1478805340212838413/YAJM_fei_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1253906860727504896/S2cZe8AZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nick Jr. & Nickelodeon Crave</div> <div style="text-align: center; font-size: 14px;">@nickjr-nickschedules</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nick Jr. & Nickelodeon Crave. | Data | Nick Jr. | Nickelodeon Crave | | --- | --- | --- | | Tweets downloaded | 3250 | 3241 | | Retweets | 54 | 2414 | | Short tweets | 755 | 0 | | Tweets kept | 2441 | 827 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/v7q3skmm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nickjr-nickschedules's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/b7w0g8eh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/b7w0g8eh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nickjr-nickschedules') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/roizmangbn
huggingtweets
2022-10-12T18:17:23Z
130
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-12T18:07:05Z
--- language: en thumbnail: http://www.huggingtweets.com/roizmangbn/1665598638215/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1355620273/RJen_003_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Евгений Ройзман</div> <div style="text-align: center; font-size: 14px;">@roizmangbn</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Евгений Ройзман. | Data | Евгений Ройзман | | --- | --- | | Tweets downloaded | 3199 | | Retweets | 143 | | Short tweets | 862 | | Tweets kept | 2194 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/m8zanrln/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @roizmangbn's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yjc4ah6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yjc4ah6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/roizmangbn') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
amoux/roberta-cord19-1M7k
amoux
2022-10-12T17:59:56Z
112
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.githubassets.com/images/icons/emoji/unicode/2695.png widget: - text: Lung infiltrates cause significant morbidity and mortality in immunocompromised <mask>. - text: Tuberculosis appears to be an important <mask> in endemic regions especially in the non-HIV, non-hematologic malignancy group. - text: For vector-transmitted diseases this places huge significance on vector mortality rates as vectors usually don't <mask> an infection and instead remain infectious for life. - text: The lung lesions were characterized by bronchointerstitial pneumonia with accumulation of neutrophils, macrophages and necrotic debris in <mask> and bronchiolar lumens and peribronchiolar/perivascular infiltration of inflammatory cells. --- # roberta-cord19-1M7k ![](https://github.githubassets.com/images/icons/emoji/unicode/2695.png) > This model is based on ***RoBERTa*** and was pre-trained on 1.7 million sentences. The training corpus was papers taken from *Semantic Scholar*'s CORD-19 historical releases. Corpus size is `13k` papers, `~60M` tokens. I used the full-text `"body_text"` of the papers in training (details below). #### Usage ```python from transformers import pipeline from transformers import RobertaTokenizerFast, RobertaForMaskedLM tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k") model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k") fillmask = pipeline("fill-mask", model=model, tokenizer=tokenizer) text = "Lung infiltrates cause significant morbidity and mortality in immunocompromised patients." masked_text = text.replace("patients", tokenizer.mask_token) predictions = fillmask(masked_text, top_k=3) ``` - Predicted tokens ```bash [{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised patients.</s>', 'score': 0.6273621320724487, 'token': 660, 'token_str': 'Ġpatients'}, {'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised individuals.</s>', 'score': 0.19800445437431335, 'token': 1868, 'token_str': 'Ġindividuals'}, {'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised animals.</s>', 'score': 0.022069649770855904, 'token': 1471, 'token_str': 'Ġanimals'}] ``` ## Dataset - About - name: *CORD-19: The Covid-19 Open Research Dataset* - date: *2020-03-18* - md5 | sha1: `a36fe181 | 8fbea927` - text-key: `body_text` - subsets (*total*: `13,202`): - *biorxiv_medrxiv*: `803` - *comm_use_subset*: `9000` - *pmc_custom_license*: `1426` - *noncomm_use_subset*: `1973` - Splits (*ratio: 0.9*) - sentences used for training: `1,687,124` - sentences used for evaluation: `187,459` - Total training steps: `210,890` - Total evaluation steps: `23,433` ## Parameters - Data - block_size: `256` - Training - per_device_train_batch_size: `8` - per_device_eval_batch_size: `8` - gradient_accumulation_steps: `2` - learning_rate: `5e-5` - num_train_epochs: `2` - fp16: `True` - fp16_opt_level: `'01'` - seed: `42` - Output - global_step: `210890` - training_loss: `3.5964575726682155` ## Evaluation - Perplexity: `17.469366079957922` ### Citation > Allen Institute CORD-19 [Historical Releases](https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases.html) ``` @article{Wang2020CORD19TC, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. 
Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ```
EdBianchi/GPT-2-finetuned-papers
EdBianchi
2022-10-12T17:50:37Z
59
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-12T14:56:54Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: EdBianchi/GPT-2-finetuned-papers results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # EdBianchi/GPT-2-finetuned-papers This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.4718 - Validation Loss: 2.2371 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.4718 | 2.2371 | 0 | ### Framework versions - Transformers 4.21.3 - TensorFlow 2.10.0 - Datasets 2.4.0 - Tokenizers 0.12.1
TRoboto/masc_kenlm_3grams_lm
TRoboto
2022-10-12T17:34:37Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# MASC The scorer model can be found under files with the name of `masc.scorer` More info on how the scorer was produced: https://deepspeech.readthedocs.io/en/master/Scorer.html
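For illustration only, an external scorer like this is typically passed to the DeepSpeech CLI together with an acoustic model; the model and audio paths below are placeholders, and the exact flags should be confirmed against the linked DeepSpeech documentation.

```
# Hypothetical invocation; replace the model and audio paths with real files.
deepspeech --model deepspeech-model.pbmm \
           --scorer masc.scorer \
           --audio arabic_sample.wav
```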
Splend1dchan/wav2vecu2-t5lephone-small-NMSQA
Splend1dchan
2022-10-12T15:44:48Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-06T16:39:04Z
wav2vecu2 -> phoneme: from the answer timespan, get the answer phonemes.

Trained on NMSQA with the context phonemes as input and the answer phonemes as output.

Results:
- Avg AOS: 0.6586811819639634
- Avg FF1: 0.697483897268189
- Exact Match: 0.2042313923568093
rgoldstein/autotrain-movie-rationales-1734060527
rgoldstein
2022-10-12T14:34:03Z
99
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:rgoldstein/autotrain-data-movie-rationales", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-10-12T14:30:47Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - rgoldstein/autotrain-data-movie-rationales co2_eq_emissions: emissions: 5.912842155368309 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1734060527 - CO2 Emissions (in grams): 5.9128 ## Validation Metrics - Loss: 0.198 - Accuracy: 0.934 - Precision: 0.937 - Recall: 0.931 - AUC: 0.983 - F1: 0.934 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/rgoldstein/autotrain-movie-rationales-1734060527 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("rgoldstein/autotrain-movie-rationales-1734060527", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("rgoldstein/autotrain-movie-rationales-1734060527", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
shuojiang/q-Taxi-v3
shuojiang
2022-10-12T14:13:24Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-12T14:13:16Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.72 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="shuojiang/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
shuojiang/q-FrozenLake-v1-4x4-noSlippery
shuojiang
2022-10-12T14:09:18Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-12T14:05:52Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="shuojiang/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
ntaka/xlm-roberta-base-finetuned-panx-de
ntaka
2022-10-12T13:32:08Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-08T13:42:20Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.863677639046538 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1343 - F1: 0.8637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 | | 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 | | 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
gsarti/it5-efficient-small-el32-ilgiornale-to-repubblica
gsarti
2022-10-12T13:19:18Z
105
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "efficient", "ilgiornale", "repubblica", "style-transfer", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-28T14:12:35Z
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - efficient - ilgiornale - repubblica - style-transfer widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." metrics: - rouge - bertscore - headline-headline-consistency-classifier - headline-article-consistency-classifier model-index: - name: it5-efficient-small-el32-ilgiornale-to-repubblica results: - task: type: headline-style-transfer-ilgiornale-to-repubblica name: "Headline style transfer (Il Giornale to Repubblica)" dataset: type: gsarti/change_it name: "CHANGE-IT" metrics: - type: rouge1 value: 0.286 name: "Test Rouge1" - type: rouge2 value: 0.099 name: "Test Rouge2" - type: rougeL value: 0.253 name: "Test RougeL" - type: bertscore value: 0.422 name: "Test BERTScore" - type: headline-headline-consistency-classifier value: 0.836 name: "Test Headline-Headline Consistency Accuracy" - type: headline-article-consistency-classifier value: 0.763 name: "Test Headline-Article Consistency Accuracy" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹 *Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!* This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). 
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model. A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipelines g2r = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-ilgiornale-to-repubblica') g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".") >>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
gsarti/it5-efficient-small-el32-informal-to-formal
gsarti
2022-10-12T13:12:49Z
107
1
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "style-transfer", "efficient", "formality-style-transfer", "it", "dataset:yahoo/xformal_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-28T13:48:32Z
--- language: - it license: apache-2.0 tags: - italian - sequence-to-sequence - style-transfer - efficient - formality-style-transfer datasets: - yahoo/xformal_it widget: - text: "maronn qualcuno mi spieg' CHECCOSA SUCCEDE?!?!" - text: "wellaaaaaaa, ma fraté sei proprio troppo simpatiko, grazieeee!!" - text: "nn capisco xke tt i ragazzi lo fanno" - text: "IT5 è SUPERMEGA BRAVISSIMO a capire tt il vernacolo italiano!!!" metrics: - rouge - bertscore model-index: - name: it5-efficient-small-el32-informal-to-formal results: - task: type: formality-style-transfer name: "Informal-to-formal Style Transfer" dataset: type: xformal_it name: "XFORMAL (Italian Subset)" metrics: - type: rouge1 value: 0.430 name: "Avg. Test Rouge1" - type: rouge2 value: 0.221 name: "Avg. Test Rouge2" - type: rougeL value: 0.408 name: "Avg. Test RougeL" - type: bertscore value: 0.630 name: "Avg. Test BERTScore" --- # IT5 Cased Small Efficient EL32 for Informal-to-formal Style Transfer 🧐 *Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!* This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model. A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. 
They can be used directly with pipelines as: ```python from transformers import pipeline i2f = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-informal-to-formal') i2f("nn capisco xke tt i ragazzi lo fanno") >>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
gsarti/it5-efficient-small-el32-question-generation
gsarti
2022-10-12T13:09:07Z
106
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "Italian", "efficient", "sequence-to-sequence", "question-generation", "squad_it", "it", "dataset:squad_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-28T14:12:07Z
--- language: - it license: apache-2.0 datasets: - squad_it tags: - Italian - efficient - sequence-to-sequence - question-generation - squad_it - text2text-generation widget: - text: "Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia" - text: "Il 14 aprile 2011, ABC ha annullato le lunghe opere di sapone All My Children e One Life to Live dopo 41 e 43 anni in onda, rispettivamente (in seguito al contraccolpo dei tifosi, ABC ha venduto i diritti ad entrambi gli spettacoli a Prospect Park, che alla fine ha rilanciato i saponi su Hulu per un' ulteriore stagione nel 2013 e con entrambe le società che si citano in giudizio per accuse di interferenza con il processo di rilancio degli spettacoli, mancato pagamento delle tasse di licenza. Il talk/lifestyle show che ha sostituito One Life to Live, The Revolution, non è riuscito a generare giudizi soddisfacenti ed è stato a sua volta annullato dopo soli sette mesi. La stagione 2011-12 ha visto l' ABC cadere al quarto posto nel 18-49 demografico nonostante rinnovando una manciata di nuovi spettacoli (compresi i drammi matricole Scandal, Revenge e Once Upon a Time) per la seconda stagione. Risposta: Hulu" - text: "L' American Broadcasting Company (ABC) (stlized nel suo logo come abc dal 1957) è una rete televisiva commerciale americana trasmissione televisiva che è di proprietà del Disney-ABC Television Group, una controllata della divisione Disney Media Networks di The Walt Disney Company. La rete fa parte delle grandi reti televisive Big Three. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan, con ulteriori uffici e stabilimenti di produzione a New York City, Los Angeles e Burbank, California. Risposta: Manhattan" - text: "La disobbedienza civile non rivoluzionaria è una semplice disobbedienza delle leggi sulla base del fatto che sono giudicate \"sbagliate\" da una coscienza individuale, o come parte di uno sforzo per rendere alcune leggi inefficaci, per causarne l' abrogazione, o per esercitare pressioni per ottenere i propri desideri politici su qualche altra questione. La disobbedienza civile rivoluzionaria è più che altro un tentativo attivo di rovesciare un governo (o di cambiare le tradizioni culturali, i costumi sociali, le credenze religiose, ecc. La rivoluzione non deve necessariamente essere politica, cioè \"rivoluzione culturale\", implica semplicemente un cambiamento radicale e diffuso in una sezione del tessuto sociale). Gli atti di Gandhi sono stati descritti come disobbedienza civile rivoluzionaria. È stato affermato che gli ungheresi sotto Ferenc Deák hanno diretto una disobbedienza civile rivoluzionaria contro il governo austriaco. Thoreau ha anche scritto di disobbedienza civile realizzando \"rivoluzione pacifica\". 
Howard Zinn, Harvey Wheeler e altri hanno identificato il diritto sposato nella Dichiarazione d' Indipendenza di \"alterare o abolire\" un governo ingiusto come principio di disobbedienza civile. Risposta: Ferenc Deák" metrics: - rouge - bertscore model-index: - name: it5-efficient-small-el32-question-generation results: - task: type: question-generation name: "Question generation" dataset: type: squad_it name: "SQuAD-IT" metrics: - type: rouge1 value: 0.382 name: "Test Rouge1" - type: rouge2 value: 0.201 name: "Test Rouge2" - type: rougeL value: 0.357 name: "Test RougeL" - type: bertscore value: 0.517 name: "Test BERTScore" --- # IT5 Cased Small Efficient EL32 for Question Generation 💭 🇮🇹 *Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!* This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model. A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipelines qg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-question-generation') qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una "grande pestilenza nell\' aria". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola "peste" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. 
Risposta: re di Francia") >>> [{"generated_text": "Per chi è stato redatto il referto medico?"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-generation") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-generation") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7.0 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
Kasturi135/finetuning-sentiment-model-3000-samples
Kasturi135
2022-10-12T13:05:39Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-12T08:50:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8666666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3357 - Accuracy: 0.8667 - F1: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
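A possible inference sketch for the card above (not part of the original card) is shown below; the label names returned by the pipeline are assumed to be the default `LABEL_0`/`LABEL_1` identifiers, since the card does not document an `id2label` mapping.

```python
# Illustrative usage sketch (not part of the original card).
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Kasturi135/finetuning-sentiment-model-3000-samples",
)

print(clf("This movie was a pleasant surprise, I loved it."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- LABEL_1 presumably corresponds to
# the positive class, but the card does not state the id2label mapping.
```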
gsarti/it5-efficient-small-el32-headline-generation
gsarti
2022-10-12T12:59:39Z
108
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "efficient", "headline-generation", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-28T14:11:12Z
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - efficient - headline-generation widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." metrics: - rouge - bertscore model-index: - name: it5-efficient-small-el32-headline-generation results: - task: type: headline-generation name: "Headline generation" dataset: type: headgen_it name: "HeadGen-IT" metrics: - type: rouge1 value: 0.299 name: "Test Rouge1" - type: rouge2 value: 0.108 name: "Test Rouge2" - type: rougeL value: 0.264 name: "Test RougeL" - type: bertscore value: 0.427 name: "Test BERTScore" --- # IT5 Cased Small Efficient EL32 for News Headline Generation 🗞️ 🇮🇹 *Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!* This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performances while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performances over the base model. A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. 
Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipelines hg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-headline-generation') hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".") >>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-headline-generation") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-headline-generation") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
csam/finetuning-sentiment-model-3000-samples
csam
2022-10-12T11:14:28Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-12T11:01:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.88 - name: F1 type: f1 value: 0.880794701986755 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2913 - Accuracy: 0.88 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
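As an alternative to the pipeline API, the sketch below loads this sentiment model directly and applies a softmax over its two classes; the index-to-sentiment mapping is an assumption based on the usual IMDB convention and is not documented in the card above.

```python
# Alternative inference sketch (not from the original card): load the model
# directly and apply a softmax over the two IMDB sentiment classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "csam/finetuning-sentiment-model-3000-samples"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The plot dragged and the acting was wooden.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze().tolist()
# Index 0 is assumed to be "negative" and index 1 "positive" (IMDB convention);
# the card itself does not document the label mapping.
print({"negative": probs[0], "positive": probs[1]})
```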
Linus4Lyf/my-awesome-setfit-model
Linus4Lyf
2022-10-12T11:00:56Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-12T11:00:44Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
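Since the card above keeps the `{MODEL_NAME}` template placeholder, the follow-up sketch below plugs in the repo id from this record (`Linus4Lyf/my-awesome-setfit-model`) and shows the sentence-similarity use case the pipeline tag advertises; the example sentences are illustrative only.

```python
# Follow-up sketch (not in the original card): score sentence pairs with cosine
# similarity, substituting the repo id for the {MODEL_NAME} placeholder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Linus4Lyf/my-awesome-setfit-model")

queries = ["How do I reset my password?"]
candidates = ["Steps to change your account password", "Today's weather forecast"]

query_emb = model.encode(queries, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(query_emb, cand_emb)  # shape: (1, 2)
print(scores)
```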
Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer
Yoshiki
2022-10-12T10:09:17Z
7
0
espnet
[ "espnet", "audio", "speech-enhancement-recognition", "en", "dataset:chime4", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2022-10-12T09:15:17Z
--- tags: - espnet - audio - speech-enhancement-recognition language: en datasets: - chime4 license: cc-by-4.0 --- ## ESPnet2 EnhS2T model ### `Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer` This model was trained by Yoshiki using chime4 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet 8ed83f45d5aa2ca6b3635e44b9c29afb9b5fb600 pip install -e . cd egs2/chime4/enh_asr1 ./run.sh --skip_data_prep false --skip_train true --download_model Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Tue Oct 11 02:40:53 UTC 2022` - python version: `3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]` - espnet version: `espnet 202207` - pytorch version: `pytorch 1.10.1+cu111` - Git hash: `` - Commit date: `` ## enh_asr_train_enh_asr_wpd_init_noenhloss_wavlm_conformer_raw_en_char ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_real_isolated_6ch_track|1640|27119|98.8|0.9|0.2|0.2|1.3|16.2| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_simu_isolated_6ch_track|1640|27120|98.9|0.9|0.2|0.1|1.3|15.2| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_real_isolated_6ch_track|1320|21409|98.4|1.4|0.2|0.2|1.8|20.6| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_simu_isolated_6ch_track|1320|21416|98.9|1.0|0.2|0.1|1.2|15.2| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_real_isolated_6ch_track|1640|160390|99.7|0.1|0.2|0.2|0.5|16.2| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_simu_isolated_6ch_track|1640|160400|99.7|0.1|0.2|0.1|0.5|15.2| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_real_isolated_6ch_track|1320|126796|99.5|0.2|0.3|0.2|0.7|20.6| |decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_simu_isolated_6ch_track|1320|126812|99.7|0.2|0.2|0.1|0.5|15.2| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## EnhS2T config <details><summary>expand</summary> ``` config: conf/tuning/train_enh_asr_wpd_init_noenhloss_wavlm_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/enh_asr_train_enh_asr_wpd_init_noenhloss_wavlm_conformer_raw_en_char ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: true 
sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 31 patience: 10 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max - - train - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 1 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: - ../enh1/exp/enh_train_enh_beamformer_wpd_ci_sdr_shorttap_raw/valid.loss.best.pth:separator:enh_model.separator - ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:frontend:s2t_model.frontend - ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:preencoder:s2t_model.preencoder - ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:encoder:s2t_model.encoder - ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:ctc:s2t_model.ctc - ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:decoder:s2t_model.decoder ignore_init_mismatch: false freeze_param: - s2t_model.frontend.upstream num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/enh_asr_stats_raw_en_char/train/speech_shape - exp/enh_asr_stats_raw_en_char/train/speech_ref1_shape - exp/enh_asr_stats_raw_en_char/train/text_spk1_shape.char valid_shape_file: - exp/enh_asr_stats_raw_en_char/valid/speech_shape - exp/enh_asr_stats_raw_en_char/valid/speech_ref1_shape - exp/enh_asr_stats_raw_en_char/valid/text_spk1_shape.char batch_type: folded valid_batch_type: null fold_length: - 80000 - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr05_multi_isolated_6ch_track/wav.scp - speech - sound - - dump/raw/tr05_multi_isolated_6ch_track/spk1.scp - speech_ref1 - sound - - dump/raw/tr05_multi_isolated_6ch_track/text_spk1 - text_spk1 - text valid_data_path_and_name_and_type: - - dump/raw/dt05_multi_isolated_6ch_track/wav.scp - speech - sound - - dump/raw/dt05_multi_isolated_6ch_track/spk1.scp - speech_ref1 - sound - - dump/raw/dt05_multi_isolated_6ch_track/text_spk1 - text_spk1 - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: sgd optim_conf: lr: 0.001 momentum: 0.9 scheduler: null scheduler_conf: {} token_list: data/en_token_list/char/tokens.txt src_token_list: null init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true enh_criterions: - name: ci_sdr conf: filter_length: 512 wrapper: fixed_order wrapper_conf: weight: 0.1 diar_num_spk: null diar_input_size: null enh_model_conf: stft_consistency: false loss_type: mask_mse mask_type: null asr_model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false st_model_conf: stft_consistency: false loss_type: mask_mse mask_type: null diar_model_conf: diar_weight: 1.0 attractor_weight: 1.0 subtask_series: - enh - asr model_conf: 
calc_enh_loss: false bypass_enh_prob: 0.0 use_preprocessor: true token_type: char bpemodel: null src_token_type: bpe src_bpemodel: null non_linguistic_symbols: data/nlsyms.txt cleaner: null g2p: null text_name: - text_spk1 enh_encoder: stft enh_encoder_conf: n_fft: 512 win_length: 400 hop_length: 128 use_builtin_complex: false enh_separator: wpe_beamformer enh_separator_conf: num_spk: 1 loss_type: spectrum use_wpe: false wnet_type: blstmp wlayers: 3 wunits: 512 wprojs: 512 wdropout_rate: 0.0 taps: 3 delay: 3 use_dnn_mask_for_wpe: true use_beamformer: true bnet_type: blstmp blayers: 3 bunits: 512 bprojs: 512 badim: 320 ref_channel: 4 use_noise_mask: true beamformer_type: wpd_souden bdropout_rate: 0.0 enh_decoder: stft enh_decoder_conf: n_fft: 512 win_length: 400 hop_length: 128 enh_mask_module: multi_mask enh_mask_module_conf: {} frontend: s3prl frontend_conf: frontend_conf: upstream: wavlm_large download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 100 num_freq_mask: 4 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} asr_preencoder: linear asr_preencoder_conf: input_size: 1024 output_size: 80 asr_encoder: conformer asr_encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d2 normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 15 asr_postencoder: null asr_postencoder_conf: {} asr_decoder: transformer asr_decoder_conf: input_layer: embed attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 st_preencoder: null st_preencoder_conf: {} st_encoder: rnn st_encoder_conf: {} st_postencoder: null st_postencoder_conf: {} st_decoder: rnn st_decoder_conf: {} st_extra_asr_decoder: rnn st_extra_asr_decoder_conf: {} st_extra_mt_decoder: rnn st_extra_mt_decoder_conf: {} diar_frontend: default diar_frontend_conf: {} diar_specaug: null diar_specaug_conf: {} diar_normalize: utterance_mvn diar_normalize_conf: {} diar_encoder: transformer diar_encoder_conf: {} diar_decoder: linear diar_decoder_conf: {} label_aggregator: label_aggregator label_aggregator_conf: {} diar_attractor: null diar_attractor_conf: {} required: - output_dir version: '202207' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, 
archivePrefix={arXiv}, primaryClass={cs.CL} } ```
troesy/toxicbert-hatexplain-label-all-tokens-False-3epoch
troesy
2022-10-12T09:58:31Z
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-12T09:45:24Z
--- tags: - generated_from_trainer model-index: - name: toxicbert-hatexplain-label-all-tokens-False-3epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # toxicbert-hatexplain-label-all-tokens-False-3epoch This model is a fine-tuned version of [unitary/toxic-bert](https://huggingface.co/unitary/toxic-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 174 | 0.1816 | | No log | 2.0 | 348 | 0.1751 | | 0.1869 | 3.0 | 522 | 0.1779 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
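## How to use

A minimal inference sketch, assuming the checkpoint keeps the `unitary/toxic-bert` tokenizer and stores its span labels in the config's `id2label` mapping; the input sentence is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned token-classification head and tag a sentence.
tagger = pipeline(
    "token-classification",
    model="troesy/toxicbert-hatexplain-label-all-tokens-False-3epoch",
    aggregation_strategy="simple",  # merge sub-word predictions into word-level spans
)
print(tagger("This is a placeholder sentence to tag."))
```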
anton-l/common_voice_generator
anton-l
2022-10-12T09:40:09Z
0
0
null
[ "region:us" ]
null
2022-04-29T15:53:56Z
## Common voice release generator 1. Copy the latest release id from the `RELEASES` dict in https://github.com/common-voice/common-voice/blob/main/web/src/components/pages/datasets/releases.ts to the `VERSIONS` variable in `generate_datasets.py`. 2. Copy the languages from https://github.com/common-voice/common-voice/blob/release-v1.78.0/web/locales/en/messages.ftl (replacing `release-v1.78.0` with the latest version tag) to the `languages.ftl` file. 3. Run `python generate_datasets.py` to generate the dataset repos. 4. `cd ..` 5. `huggingface-cli repo create --type dataset --organization mozilla-foundation common_voice_11_0` 6. `git clone https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0` 7. `cd common_voice_11_0` 8. `cp ../common_voice_generator/common_voice_11_0/* ./` 9. `git add . && git commit -m "Release" && git push`
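After the repo is pushed (step 9), one way to sanity-check the new release is to stream a few rows with 🤗 Datasets. This is only a sketch: it assumes the gated-dataset terms have been accepted on the Hub, that a valid access token is configured locally, and it uses the `ab` config as an arbitrary example.

```python
from datasets import load_dataset

# Stream a handful of rows from the freshly generated release to verify it loads.
cv = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "ab",                 # any language config generated in step 2
    split="train",
    streaming=True,
    use_auth_token=True,  # the Common Voice repos are gated
)
for i, row in enumerate(cv):
    print(row["path"], row["sentence"])
    if i == 2:
        break
```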
din0s/bart-large-asqa-cb
din0s
2022-10-12T09:27:35Z
6
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-12T08:51:33Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-large-asqa-cb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-asqa-cb This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4791 - Rougelsum: 38.2862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 3.347 | 1.0 | 545 | 2.5353 | 37.3812 | | 2.7829 | 2.0 | 1090 | 2.5087 | 37.6431 | | 2.6973 | 3.0 | 1635 | 2.4906 | 37.9194 | | 2.6125 | 4.0 | 2180 | 2.4812 | 38.1180 | | 2.5697 | 5.0 | 2725 | 2.4762 | 38.1616 | | 2.5086 | 6.0 | 3270 | 2.4773 | 38.1370 | | 2.4678 | 7.0 | 3815 | 2.4831 | 37.9346 | | 2.4404 | 8.0 | 4360 | 2.4896 | 38.1150 | | 2.3866 | 9.0 | 4905 | 2.4775 | 38.2222 | | 2.3791 | 10.0 | 5450 | 2.4791 | 38.2862 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
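## How to use

A minimal generation sketch, assuming the checkpoint is called like any other BART seq2seq model; the exact input format expected for ASQA-style long-form answers (e.g. whether retrieved passages are appended to the question) is not documented here, so the question below is just a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("din0s/bart-large-asqa-cb")
model = AutoModelForSeq2SeqLM.from_pretrained("din0s/bart-large-asqa-cb")

# Encode a question and generate a long-form answer with beam search.
inputs = tokenizer("Who painted the Mona Lisa?", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```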
troesy/SpanBERT-hatexplain-label-all-tokens-False-3epoch
troesy
2022-10-12T09:19:02Z
130
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-12T08:34:25Z
--- tags: - generated_from_trainer model-index: - name: SpanBERT-hatexplain-label-all-tokens-False-3epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpanBERT-hatexplain-label-all-tokens-False-3epoch This model is a fine-tuned version of [SpanBERT/spanbert-large-cased](https://huggingface.co/SpanBERT/spanbert-large-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 174 | 0.1810 | | No log | 2.0 | 348 | 0.1657 | | 0.1781 | 3.0 | 522 | 0.1749 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
osanseviero/q-FrozenLake-v1-4x4-noSlippery-works
osanseviero
2022-10-12T07:57:20Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-12T07:57:11Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery-works results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="osanseviero/q-FrozenLake-v1-4x4-noSlippery-works", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
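The snippet above relies on the `load_from_hub` and `evaluate_agent` helpers from the Deep RL course notebook, which are not defined here. A minimal sketch of `load_from_hub`, assuming the repo stores the model as a pickled dict in `q-learning.pkl` (as the call above suggests), could look like this:

```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hub and load it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(
    repo_id="osanseviero/q-FrozenLake-v1-4x4-noSlippery-works",
    filename="q-learning.pkl",
)
env = gym.make(model["env_id"], is_slippery=False)  # FrozenLake-v1 4x4, no slippery tiles
```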
kiddothe2b/longformer-base-4096
kiddothe2b
2022-10-12T07:51:14Z
104
0
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "long-documents", "en", "dataset:c4", "arxiv:2004.05150", "arxiv:2210.05529", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-10T15:10:09Z
---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
arxiv: 2210.05529
language: en
tags:
- long-documents
datasets:
- c4
model-index:
- name: kiddothe2b/longformer-base-4096
  results: []
---

# Longformer / longformer-base-4096

## Model description

[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. This version of Longformer is presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).

The model has been warm-started re-using the weights of RoBERTa (Liu et al., 2019), and continually pre-trained for MLM on long sequences following the paradigm of the original Longformer released by Beltagy et al. (2020). It supports sequences of length up to 4,096.

Longformer uses a combination of a sliding window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=longformer) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification or question answering.

## How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

mlm_model = pipeline('fill-mask', model='kiddothe2b/longformer-base-4096', trust_remote_code=True)
mlm_model("Hello I'm a <mask> model.")
```

You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/longformer-base-4096", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/longformer-base-4096", trust_remote_code=True)
```

## Limitations and bias

The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions.

## Training procedure

### Training and evaluation data

The model has been warm-started from the [roberta-base](https://huggingface.co/roberta-base) checkpoint and has been further pre-trained for an additional 50k steps on long sequences (> 1024 subwords) of [C4](https://huggingface.co/datasets/c4) (Raffel et al., 2020).
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50000

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7067        | 0.2   | 10000 | 1.5923          | 0.6714   |
| 1.6532        | 0.4   | 20000 | 1.5494          | 0.6784   |
| 1.622         | 0.6   | 30000 | 1.5208          | 0.6830   |
| 1.588         | 0.8   | 40000 | 1.4880          | 0.6876   |
| 1.5682        | 1.0   | 50000 | 1.4680          | 0.6908   |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6

## Citing

If you use this model in your research, please cite:

[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).

```
@misc{chalkidis-etal-2022-hat,
  url = {https://arxiv.org/abs/2210.05529},
  author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
  title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
  publisher = {arXiv},
  year = {2022},
}
```

Also cite the original work: [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150).

```
@article{Beltagy2020Longformer,
  title={Longformer: The Long-Document Transformer},
  author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
  journal={arXiv:2004.05150},
  year={2020},
}
```
kiddothe2b/hierarchical-transformer-LC1-mini-1024
kiddothe2b
2022-10-12T07:46:58Z
103
0
transformers
[ "transformers", "pytorch", "hierarchical-transformer", "fill-mask", "long-documents", "custom_code", "en", "dataset:wikipedia", "arxiv:2210.05529", "license:cc-by-sa-4.0", "autotrain_compatible", "region:us" ]
fill-mask
2022-10-11T09:02:48Z
---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
language: en
arxiv: 2210.05529
tags:
- long-documents
datasets:
- wikipedia
model-index:
- name: kiddothe2b/hierarchical-transformer-LC1-mini-1024
  results: []
---

# Hierarchical Attention Transformer (HAT) / hierarchical-transformer-LC1-mini-1024

## Model description

This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).

The model has been warm-started re-using the weights of miniature BERT (Turc et al., 2019), and continually pre-trained for MLM following the paradigm of Longformer released by Beltagy et al. (2020). It supports sequences of length up to 1,024.

HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?other=hierarchical-transformer) to look for other versions of HAT, or fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.

## How to use

You can use this model directly for masked language modeling:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
mlm_model = AutoModelForMaskedLM.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
```

You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
```

## Limitations and bias

The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions.

## Training procedure

### Training and evaluation data

The model has been warm-started from the [google/bert_uncased_L-6_H-256_A-4](https://huggingface.co/google/bert_uncased_L-6_H-256_A-4) checkpoint and has been further pre-trained for an additional 50k steps on English [Wikipedia](https://huggingface.co/datasets/wikipedia).
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: tpu - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 50000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3959 | 0.2 | 10000 | 2.2258 | | 2.3395 | 0.4 | 20000 | 2.1738 | | 2.3082 | 0.6 | 30000 | 2.1404 | | 2.273 | 0.8 | 40000 | 2.1145 | | 2.262 | 1.14 | 50000 | 2.1004 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6 ## Citing If you use HAT in your research, please cite: [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint). ``` @misc{chalkidis-etal-2022-hat, url = {https://arxiv.org/abs/2210.05529}, author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond}, title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification}, publisher = {arXiv}, year = {2022}, } ```
MuhammadIqbalBazmi/wav2vec2-conformer-rel-pos-large-960h-ft-intent-classification-ori
MuhammadIqbalBazmi
2022-10-12T07:04:07Z
133
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2-conformer", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-10-12T06:23:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-conformer-rel-pos-large-960h-ft-intent-classification-ori results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-conformer-rel-pos-large-960h-ft-intent-classification-ori This model is a fine-tuned version of [facebook/wav2vec2-conformer-rel-pos-large-960h-ft](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large-960h-ft) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2518 - Accuracy: 0.5833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.2018 | 1.0 | 28 | 2.1963 | 0.125 | | 2.1871 | 2.0 | 56 | 2.1715 | 0.3333 | | 2.1499 | 3.0 | 84 | 2.1349 | 0.3333 | | 2.1236 | 4.0 | 112 | 2.0749 | 0.3333 | | 2.0814 | 5.0 | 140 | 2.0232 | 0.3333 | | 2.0905 | 6.0 | 168 | 1.9028 | 0.375 | | 1.9167 | 7.0 | 196 | 1.8469 | 0.3958 | | 1.7048 | 8.0 | 224 | 1.6481 | 0.4583 | | 1.4723 | 9.0 | 252 | 1.5350 | 0.4583 | | 1.5265 | 10.0 | 280 | 1.4526 | 0.5 | | 1.2621 | 11.0 | 308 | 1.4451 | 0.4583 | | 1.5083 | 12.0 | 336 | 1.3296 | 0.4792 | | 1.1857 | 13.0 | 364 | 1.2983 | 0.4792 | | 1.3449 | 14.0 | 392 | 1.3026 | 0.4792 | | 1.2061 | 15.0 | 420 | 1.3181 | 0.4792 | | 1.2544 | 16.0 | 448 | 1.2603 | 0.4792 | | 1.0731 | 17.0 | 476 | 1.2607 | 0.4792 | | 0.8836 | 18.0 | 504 | 1.2644 | 0.4792 | | 1.0917 | 19.0 | 532 | 1.2345 | 0.4792 | | 1.0786 | 20.0 | 560 | 1.2791 | 0.4792 | | 1.1616 | 21.0 | 588 | 1.2238 | 0.4792 | | 1.0614 | 22.0 | 616 | 1.2305 | 0.4583 | | 0.9617 | 23.0 | 644 | 1.2315 | 0.4792 | | 0.9652 | 24.0 | 672 | 1.2931 | 0.4792 | | 0.9042 | 25.0 | 700 | 1.1246 | 0.5 | | 1.0865 | 26.0 | 728 | 1.1490 | 0.4792 | | 0.9653 | 27.0 | 756 | 1.1713 | 0.5 | | 0.858 | 28.0 | 784 | 1.1726 | 0.5208 | | 0.8364 | 29.0 | 812 | 1.2142 | 0.5 | | 0.6798 | 30.0 | 840 | 1.2163 | 0.5208 | | 0.9284 | 31.0 | 868 | 1.1398 | 0.4792 | | 0.7383 | 32.0 | 896 | 1.2418 | 0.5208 | | 0.651 | 33.0 | 924 | 1.1734 | 0.5 | | 0.7416 | 34.0 | 952 | 1.2285 | 0.5 | | 0.6287 | 35.0 | 980 | 1.1467 | 0.5833 | | 0.6806 | 36.0 | 1008 | 1.1589 | 0.5625 | | 0.6148 | 37.0 | 1036 | 1.1373 | 0.5833 | | 0.7174 | 38.0 | 1064 | 1.2118 | 0.5625 | | 0.6056 | 39.0 | 1092 | 1.2205 | 0.5833 | | 0.7041 | 40.0 | 1120 | 1.2408 | 0.5833 | | 0.631 | 41.0 | 1148 | 1.2350 | 0.5833 | | 0.6028 | 42.0 | 1176 | 1.2787 | 0.5833 | | 0.5942 | 43.0 | 1204 | 1.2463 | 0.5833 | | 0.5441 | 44.0 | 1232 | 1.2496 | 0.5833 | | 0.5042 | 45.0 | 1260 | 1.2518 | 0.5833 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
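## How to use

A minimal inference sketch, assuming 16 kHz mono audio and that the intent labels live in the checkpoint's `id2label` mapping (the training dataset is not documented here); `utterance.wav` is a placeholder file name.

```python
from transformers import pipeline

# Classify the intent of a 16 kHz mono speech clip.
classifier = pipeline(
    "audio-classification",
    model="MuhammadIqbalBazmi/wav2vec2-conformer-rel-pos-large-960h-ft-intent-classification-ori",
)
print(classifier("utterance.wav", top_k=3))
```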
Souvik123/donut-base-sroie
Souvik123
2022-10-12T07:03:42Z
46
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2022-10-12T06:33:17Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
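## How to use

A minimal inference sketch for the fine-tuned Donut checkpoint. It assumes the processor was pushed together with the model (otherwise load it from `naver-clova-ix/donut-base`); `receipt.jpg` is a placeholder image, and the task prompt `<s>` is likewise a placeholder — the real start token depends on how the decoder was configured during fine-tuning.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Souvik123/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("Souvik123/donut-base-sroie")

image = Image.open("receipt.jpg").convert("RGB")  # placeholder scanned receipt
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # placeholder; use the task start token chosen at fine-tuning time
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

# Autoregressively decode the key-value fields from the receipt image.
outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
)

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
print(processor.token2json(sequence))  # parse the generated token sequence into JSON fields
```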
huggingtweets/deepleffen-the_dealersh1p
huggingtweets
2022-10-12T05:24:36Z
130
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-12T05:09:22Z
--- language: en thumbnail: http://www.huggingtweets.com/deepleffen-the_dealersh1p/1665552272191/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1211158441504456704/dCNSnY4k_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">『 』『dan』『 』 & Deep Leffen Bot</div> <div style="text-align: center; font-size: 14px;">@deepleffen-the_dealersh1p</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 『 』『dan』『 』 & Deep Leffen Bot. | Data | 『 』『dan』『 』 | Deep Leffen Bot | | --- | --- | --- | | Tweets downloaded | 2673 | 608 | | Retweets | 1336 | 14 | | Short tweets | 235 | 27 | | Tweets kept | 1102 | 567 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xu780cl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-the_dealersh1p's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3w2qdw30) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3w2qdw30/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/deepleffen-the_dealersh1p') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
MuhammadIqbalBazmi/wav2vec2-xls-r-300m-intent-classification-ori
MuhammadIqbalBazmi
2022-10-12T04:13:26Z
159
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-10-11T22:05:35Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-xls-r-300m-intent-classification-ori results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-intent-classification-ori This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3107 - Accuracy: 0.625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1982 | 1.0 | 14 | 2.1951 | 0.0625 | | 2.2021 | 2.0 | 28 | 2.1847 | 0.1458 | | 2.1819 | 3.0 | 42 | 2.1661 | 0.3333 | | 2.1789 | 4.0 | 56 | 2.1413 | 0.3333 | | 2.164 | 5.0 | 70 | 2.1183 | 0.3333 | | 2.1484 | 6.0 | 84 | 2.0974 | 0.3333 | | 2.1199 | 7.0 | 98 | 2.0939 | 0.3333 | | 2.1343 | 8.0 | 112 | 2.0829 | 0.3333 | | 2.1397 | 9.0 | 126 | 2.0654 | 0.3333 | | 2.1045 | 10.0 | 140 | 2.0553 | 0.3333 | | 2.1083 | 11.0 | 154 | 2.0255 | 0.3333 | | 2.0914 | 12.0 | 168 | 2.0065 | 0.3333 | | 2.0434 | 13.0 | 182 | 1.9696 | 0.3333 | | 2.0687 | 14.0 | 196 | 1.9231 | 0.4167 | | 2.0237 | 15.0 | 210 | 1.8679 | 0.4167 | | 1.9562 | 16.0 | 224 | 1.8184 | 0.4167 | | 2.0361 | 17.0 | 238 | 1.8803 | 0.3958 | | 1.888 | 18.0 | 252 | 1.7802 | 0.4167 | | 1.899 | 19.0 | 266 | 1.7662 | 0.4167 | | 1.8959 | 20.0 | 280 | 1.7076 | 0.4167 | | 1.8368 | 21.0 | 294 | 1.6566 | 0.4375 | | 1.7358 | 22.0 | 308 | 1.6283 | 0.5 | | 1.7877 | 23.0 | 322 | 1.6411 | 0.4583 | | 1.7311 | 24.0 | 336 | 1.5525 | 0.5208 | | 1.7079 | 25.0 | 350 | 1.5163 | 0.5 | | 1.6496 | 26.0 | 364 | 1.5458 | 0.5 | | 1.6374 | 27.0 | 378 | 1.5211 | 0.5 | | 1.6048 | 28.0 | 392 | 1.4533 | 0.5417 | | 1.5927 | 29.0 | 406 | 1.4319 | 0.5 | | 1.4987 | 30.0 | 420 | 1.4579 | 0.5208 | | 1.5745 | 31.0 | 434 | 1.4167 | 0.6042 | | 1.4632 | 32.0 | 448 | 1.4471 | 0.5417 | | 1.4686 | 33.0 | 462 | 1.4116 | 0.5625 | | 1.5368 | 34.0 | 476 | 1.3872 | 0.6042 | | 1.4327 | 35.0 | 490 | 1.3491 | 0.5833 | | 1.3978 | 36.0 | 504 | 1.3325 | 0.5833 | | 1.4509 | 37.0 | 518 | 1.3236 | 0.6042 | | 1.3881 | 38.0 | 532 | 1.3426 | 0.5833 | | 1.39 | 39.0 | 546 | 1.3137 | 0.6042 | | 1.4153 | 40.0 | 560 | 1.3123 | 0.625 | | 1.3635 | 41.0 | 574 | 1.3224 | 0.6042 | | 1.403 | 42.0 | 588 | 1.3111 | 0.6042 | | 1.3763 | 43.0 | 602 | 1.3197 | 0.5833 | | 1.3539 | 44.0 | 616 | 1.3077 | 0.6042 | | 1.306 | 45.0 | 630 | 1.3107 | 0.625 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
matnun/segformer-b0-finetuned-segments-sidewalk-2
matnun
2022-10-12T03:58:43Z
161
0
transformers
[ "transformers", "pytorch", "segformer", "vision", "image-segmentation", "generated_from_trainer", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2022-10-12T03:37:11Z
--- license: other tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-sidewalk-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk-2 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set: - Loss: 1.9042 - Mean Iou: 0.1600 - Mean Accuracy: 0.1997 - Overall Accuracy: 0.7338 - Per Category Iou: [nan, 0.27359520957005035, 0.6563592089876799, 0.0, 0.23344374046535918, 0.0, nan, 0.0, 0.0, 0.0, 0.5539341917024321, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6213519498256361, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.8012808797206368, 0.0, 0.8609473035107046, nan, 0.0, 0.0, 0.0] - Per Category Accuracy: [nan, 0.38598740280061317, 0.9344800917343116, 0.0, 0.23402267811135147, 0.0, nan, 0.0, 0.0, 0.0, 0.6574569071869553, nan, nan, nan, nan, 0.0, 0.0, nan, 0.889953470705536, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9339123774958169, 0.0, 0.9562267789312698, nan, 0.0, 0.0, 0.0] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 2.8419 | 0.42 | 20 | 3.2243 | 0.1239 | 0.1973 | 0.6992 | [0.0, 0.221283072298205, 0.6482498250140304, 0.0, 0.36607695456244177, 0.013827775204570018, nan, 1.0254201659129828e-05, 0.0, 0.0, 0.5416500682753081, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.5339731316050166, 0.0, 0.0006440571922786744, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7498440701547007, 0.0, 0.7659222854515146, 0.0, 0.0, 0.0, 0.0] | [nan, 0.3346613609105567, 0.8582083544770268, 0.0, 0.5101472837243907, 0.015482685970504024, nan, 1.0366454154356502e-05, 0.0, 0.0, 0.6745826026281508, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8093545247364923, 0.0, 0.0006458279514337381, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9324806212895075, 0.0, 0.797418357423677, nan, 0.0, 0.0, 0.0] | | 2.3662 | 0.83 | 40 | 2.5147 | 0.1402 | 0.1798 | 0.6989 | [nan, 0.19549119549985344, 0.6036027201962391, 0.0, 0.0019222772099991463, 0.000300503343099692, nan, 0.0, 
0.0, 0.0, 0.47853978429259575, nan, nan, nan, nan, 0.0, 0.0, nan, 0.5820555774612892, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.7898452112422248, 0.0, 0.8521568687502872, nan, 0.0, 0.0, 0.0] | [nan, 0.25107981668136076, 0.9396577375184628, 0.0, 0.0019233683746435017, 0.0003025228242666523, nan, 0.0, 0.0, 0.0, 0.5513810659584686, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8953553793561865, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9300976130892274, 0.0, 0.9250758451014455, nan, 0.0, 0.0, 0.0] | | 2.1745 | 1.25 | 60 | 2.0428 | 0.1485 | 0.1882 | 0.7162 | [nan, 0.24240648716131, 0.6262941164542789, 0.0, 0.04440846090507781, 0.0, nan, 0.0, 0.0, 0.0, 0.522913696330921, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6194890050543631, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.7947837731119848, 0.0, 0.8609570537373858, nan, 0.0, 0.0, 0.0] | [nan, 0.3318909301752965, 0.9392945927202885, 0.0, 0.04443587164684973, 0.0, nan, 0.0, 0.0, 0.0, 0.6149676720993105, nan, nan, nan, nan, 0.0, 0.0, nan, 0.8836542113759377, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9409947331534898, 0.0, 0.9509521157666382, nan, 0.0, 0.0, 0.0] | | 1.986 | 1.67 | 80 | 1.9042 | 0.1600 | 0.1997 | 0.7338 | [nan, 0.27359520957005035, 0.6563592089876799, 0.0, 0.23344374046535918, 0.0, nan, 0.0, 0.0, 0.0, 0.5539341917024321, nan, nan, nan, nan, 0.0, 0.0, nan, 0.6213519498256361, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.8012808797206368, 0.0, 0.8609473035107046, nan, 0.0, 0.0, 0.0] | [nan, 0.38598740280061317, 0.9344800917343116, 0.0, 0.23402267811135147, 0.0, nan, 0.0, 0.0, 0.0, 0.6574569071869553, nan, nan, nan, nan, 0.0, 0.0, nan, 0.889953470705536, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.9339123774958169, 0.0, 0.9562267789312698, nan, 0.0, 0.0, 0.0] | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
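## How to use

A minimal inference sketch, assuming the feature extractor configuration was pushed with the checkpoint (otherwise it can be loaded from `nvidia/mit-b0`); `street.jpg` is a placeholder image path.

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

feature_extractor = SegformerFeatureExtractor.from_pretrained("matnun/segformer-b0-finetuned-segments-sidewalk-2")
model = SegformerForSemanticSegmentation.from_pretrained("matnun/segformer-b0-finetuned-segments-sidewalk-2")

image = Image.open("street.jpg").convert("RGB")  # placeholder street-level photo
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax to get the label map.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
label_map = upsampled.argmax(dim=1)[0]
print(label_map.shape)
```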
eunyounglee/mBART_translator_kobart
eunyounglee
2022-10-12T02:53:22Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-11T02:59:35Z
--- license: mit tags: - generated_from_trainer metrics: - bleu model-index: - name: mBART_translator_kobart results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART_translator_kobart This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7003 - Bleu: 45.4811 - Gen Len: 19.9289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 2.9126 | 1.0 | 1912 | 1.3966 | 36.2687 | 19.9289 | | 1.6918 | 2.0 | 3824 | 0.8254 | 43.7633 | 19.9289 | | 1.3387 | 3.0 | 5736 | 0.7003 | 45.4811 | 19.9289 | ### Framework versions - Transformers 4.23.0 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
UIC-Liu-Lab/CPT
UIC-Liu-Lab
2022-10-12T02:53:11Z
82
3
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "custom_code", "en", "arxiv:2210.05549", "autotrain_compatible", "region:us" ]
text-classification
2022-10-11T11:22:40Z
---
language: en
---

# CPT

This repository contains the code and pre-trained models for our EMNLP'22 paper [Continual Training of Language Models for Few-Shot Learning](https://arxiv.org/abs/2210.05549) by <a href="https://vincent950129.github.io/">Zixuan Ke</a>, <a href="https://linhaowei1.github.io/">Haowei Lin</a>, <a href="https://shaoyijia.github.io/">Yijia Shao</a>, <a href="https://howardhsu.github.io/">Hu Xu</a>, <a href="https://leishu02.github.io/">Lei Shu</a>, and <a href="https://www.cs.uic.edu/~liub/">Bing Liu</a>.

## Requirements

First, install PyTorch by following the instructions from [the official website](https://pytorch.org). To faithfully reproduce our results, please use the correct `1.7.0` version corresponding to your platforms/CUDA versions. PyTorch versions higher than `1.7.0` should also work. For example, if you use Linux and **CUDA11** ([how to check CUDA version](https://varhowto.com/check-cuda-version/)), install PyTorch by the following command,

```bash
pip install torch==1.7.0+cu110 -f https://download.pytorch.org/whl/torch_stable.html
```

If you instead use **CUDA** `<11` or **CPU**, install PyTorch by the following command,

```bash
pip install torch==1.7.0
```

Then run the following script to install the remaining dependencies,

```bash
pip install -r requirements.txt
```

**Attention**: Our model is based on `transformers==4.11.3` and `adapter-transformers==2.2.0`. Using them from other versions may cause some unexpected bugs.

## Use CPT with Huggingface

You can easily import our continually post-trained model with HuggingFace's `transformers`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Import our model. The package will take care of downloading the models automatically
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("UIC-Liu-Lab/CPT", trust_remote_code=True)

# Tokenize input texts
texts = [
    "There's a kid on a skateboard.",
    "A kid is skateboarding.",
    "A kid is inside the house."
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Task id and smax
t = torch.LongTensor([0]).to(model.device)  # using task 0's CL-plugin, choose from {0, 1, 2, 3}
smax = 400

# Get the model output!
res = model(**inputs, return_dict=True, t=t, s=smax)
```

If you encounter any problem when directly loading the models by HuggingFace's API, you can also download the models manually from the [repo](https://huggingface.co/UIC-Liu-Lab/CPT/tree/main) and use `model = AutoModel.from_pretrained({PATH TO THE DOWNLOADED MODEL})`.

Note: The post-trained weights you load contain un-trained classification heads. The post-training sequence is `Restaurant -> AI -> ACL -> AGNews`; you can use the downloaded weights to fine-tune the corresponding end-task. The results (MF1/Acc) will be consistent with the following.

| | Restaurant | AI | ACL | AGNews | Avg. |
| --------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| UIC-Liu-Lab/CPT | 53.90 / 75.13 | 30.42 / 30.89 | 37.56 / 38.53 | 63.77 / 65.79 | 46.41 / 52.59 |

## Citation

Please cite our paper if you use CPT in your work:

```bibtex
@inproceedings{ke2022continual,
  title={Continual Training of Language Models for Few-Shot Learning},
  author={Ke, Zixuan and Lin, Haowei and Shao, Yijia and Xu, Hu and Shu, Lei and Liu, Bing},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2022}
}
```