modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
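The fields above describe one record per model repository: identifier and author, engagement counts, tags, the pipeline task, timestamps, and the full model card text. A minimal sketch for querying such records follows, assuming they have been exported to a Parquet file named `models.parquet` (the file name and the pandas-based workflow are assumptions; the schema itself does not prescribe a storage format).

```python
import pandas as pd

# Load the metadata snapshot; "models.parquet" is a placeholder file name.
df = pd.read_parquet("models.parquet")

# How many checkpoints exist per task, and the most-downloaded repository per author.
print(df["pipeline_tag"].value_counts())
top_per_author = (
    df.sort_values("downloads", ascending=False)
      .groupby("author", as_index=False)
      .head(1)[["author", "modelId", "downloads"]]
)
print(top_per_author)
```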
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39
ali2066
2022-03-01T14:32:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6169 - Precision: 0.0031 - Recall: 0.0357 - F1: 0.0057 - Accuracy: 0.6464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 10 | 0.6339 | 0.0116 | 0.0120 | 0.0118 | 0.6662 | | No log | 2.0 | 20 | 0.6182 | 0.0064 | 0.0120 | 0.0084 | 0.6688 | | No log | 3.0 | 30 | 0.6139 | 0.0029 | 0.0241 | 0.0052 | 0.6659 | | No log | 4.0 | 40 | 0.6172 | 0.0020 | 0.0241 | 0.0037 | 0.6622 | | No log | 5.0 | 50 | 0.6165 | 0.0019 | 0.0241 | 0.0036 | 0.6599 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
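The card above (like the other ali2066 token-classification checkpoints in this dump) documents training but not inference. A minimal usage sketch with the transformers pipeline API, not taken from the card itself:

```python
from transformers import pipeline

# Token-classification inference with the checkpoint documented above;
# aggregation_strategy="simple" merges word-piece predictions into word-level spans.
tagger = pipeline(
    "token-classification",
    model="ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39",
    aggregation_strategy="simple",
)
print(tagger("I really doubt this argument holds up in practice."))
```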
ali2066/distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
ali2066
2022-03-01T14:20:06Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1832 - Precision: 0.6138 - Recall: 0.7169 - F1: 0.6613 - Accuracy: 0.9332 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 11 | 0.2740 | 0.4554 | 0.5460 | 0.4966 | 0.8943 | | No log | 2.0 | 22 | 0.2189 | 0.5470 | 0.6558 | 0.5965 | 0.9193 | | No log | 3.0 | 33 | 0.2039 | 0.5256 | 0.6706 | 0.5893 | 0.9198 | | No log | 4.0 | 44 | 0.2097 | 0.5401 | 0.6795 | 0.6018 | 0.9237 | | No log | 5.0 | 55 | 0.2255 | 0.6117 | 0.6825 | 0.6452 | 0.9223 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39
ali2066
2022-03-01T14:05:57Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2903 - Precision: 0.2440 - Recall: 0.4465 - F1: 0.3155 - Accuracy: 0.8706 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.4378 | 0.0463 | 0.1136 | 0.0658 | 0.7742 | | No log | 2.0 | 60 | 0.3739 | 0.1472 | 0.3756 | 0.2115 | 0.8284 | | No log | 3.0 | 90 | 0.3422 | 0.1865 | 0.4330 | 0.2607 | 0.8374 | | No log | 4.0 | 120 | 0.3286 | 0.2243 | 0.4833 | 0.3064 | 0.8438 | | No log | 5.0 | 150 | 0.3239 | 0.2356 | 0.4809 | 0.3163 | 0.8490 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35
ali2066
2022-03-01T14:02:32Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_00_35 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1155 - Precision: 0.5720 - Recall: 0.4705 - F1: 0.5163 - Accuracy: 0.9687 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 15 | 0.1256 | 0.04 | 0.0021 | 0.0039 | 0.9624 | | No log | 2.0 | 30 | 0.0963 | 0.7121 | 0.5711 | 0.6339 | 0.9794 | | No log | 3.0 | 45 | 0.0844 | 0.6205 | 0.5732 | 0.5959 | 0.9778 | | No log | 4.0 | 60 | 0.0770 | 0.6201 | 0.5856 | 0.6023 | 0.9778 | | No log | 5.0 | 75 | 0.0750 | 0.6174 | 0.5856 | 0.6011 | 0.9777 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_58_58
ali2066
2022-03-01T14:00:30Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_58_58 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_58_58 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2698 - Precision: 0.3554 - Recall: 0.4884 - F1: 0.4114 - Accuracy: 0.8973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 11 | 0.4423 | 0.0261 | 0.0184 | 0.0216 | 0.7728 | | No log | 2.0 | 22 | 0.3220 | 0.1256 | 0.3129 | 0.1793 | 0.8735 | | No log | 3.0 | 33 | 0.2561 | 0.2633 | 0.4264 | 0.3255 | 0.9103 | | No log | 4.0 | 44 | 0.2535 | 0.3303 | 0.4509 | 0.3813 | 0.9115 | | No log | 5.0 | 55 | 0.2414 | 0.3696 | 0.4693 | 0.4135 | 0.9181 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_57_21
ali2066
2022-03-01T13:58:54Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_57_21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_57_21 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5905 - Precision: 0.0024 - Recall: 0.0143 - F1: 0.0041 - Accuracy: 0.6867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 10 | 0.6081 | 0.0 | 0.0 | 0.0 | 0.6904 | | No log | 2.0 | 20 | 0.6014 | 0.0025 | 0.0130 | 0.0042 | 0.6934 | | No log | 3.0 | 30 | 0.5953 | 0.0 | 0.0 | 0.0 | 0.6930 | | No log | 4.0 | 40 | 0.5858 | 0.0 | 0.0 | 0.0 | 0.6941 | | No log | 5.0 | 50 | 0.5815 | 0.0 | 0.0 | 0.0 | 0.6947 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21
ali2066
2022-03-01T13:44:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-14_43_21 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1212 - Precision: 0.0637 - Recall: 0.0080 - F1: 0.0141 - Accuracy: 0.9707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 15 | 0.1113 | 0.0 | 0.0 | 0.0 | 0.9752 | | No log | 2.0 | 30 | 0.1069 | 0.0 | 0.0 | 0.0 | 0.9752 | | No log | 3.0 | 45 | 0.0992 | 0.0 | 0.0 | 0.0 | 0.9752 | | No log | 4.0 | 60 | 0.0938 | 0.0 | 0.0 | 0.0 | 0.9752 | | No log | 5.0 | 75 | 0.0920 | 0.0 | 0.0 | 0.0 | 0.9752 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35
ali2066
2022-03-01T13:39:36Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3190 - Precision: 0.1194 - Recall: 0.2563 - F1: 0.1629 - Accuracy: 0.8546 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.4963 | 0.0223 | 0.0562 | 0.0319 | 0.7461 | | No log | 2.0 | 60 | 0.4089 | 0.0617 | 0.1359 | 0.0849 | 0.8093 | | No log | 3.0 | 90 | 0.3919 | 0.1053 | 0.2101 | 0.1403 | 0.8219 | | No log | 4.0 | 120 | 0.3787 | 0.1202 | 0.2482 | 0.1619 | 0.8270 | | No log | 5.0 | 150 | 0.3745 | 0.1171 | 0.2391 | 0.1572 | 0.8311 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
coastalcph/fairlex-fscs-minilm
coastalcph
2022-03-01T13:36:58Z
14
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "legal", "fairlex", "de", "fr", "it", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - de - fr - it pipeline_tag: fill-mask license: cc-by-nc-sa-4.0 tags: - legal - fairlex widget: - text: "Aus seinem damaligen strafbaren Verhalten resultierte eine Forderung der Nachlassverwaltung eines <mask>, worüber eine aussergerichtliche Vereinbarung über Fr. 500'000." - text: " Elle avait pour but social les <mask> dans le domaine des changes, en particulier l'exploitation d'une plateforme internet." - text: "Il Pretore ha accolto la petizione con sentenza 16 luglio 2015, accordando all'attore l'importo <mask>, con interessi di mora a partire dalla notifica del precetto esecutivo, e ha rigettato in tale misura l'opposizione interposta a quest'ultimo." --- # FairLex: A multilingual benchmark for evaluating fairness in legal text processing We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. --- Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. --- ## Pre-training details For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC). We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads. We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC). ## Models list | Model name | Training corpora | Language | |-----------------------------------|------------------|--------------------| | `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` | | `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` | | `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] | | `coastalcph/fairlex-cail-minilm` | CAIL | `zh` | ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-fscs-minilm") model = AutoModel.from_pretrained("coastalcph/fairlex-fscs-minilm") ``` ## Evaluation on downstream tasks Consider the experiments in the article: _Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._ ## Author - Publication ``` @inproceedings{chalkidis-2022-fairlex, author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders}, title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics}, year={2022}, address={Dublin, Ireland} } ``` Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
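The FairLex card above loads the encoder with `AutoModel`; since the checkpoint is published under the fill-mask task, a masked-word query is the most direct smoke test. A sketch reusing the card's own German widget sentence (the top-k printing is an assumption, not part of the card):

```python
from transformers import pipeline

# Masked-language-model inference with the FSCS checkpoint; <mask> is the
# XLM-R-style mask token used in the card's widget examples.
fill = pipeline("fill-mask", model="coastalcph/fairlex-fscs-minilm")
predictions = fill(
    "Aus seinem damaligen strafbaren Verhalten resultierte eine Forderung der "
    "Nachlassverwaltung eines <mask>, worüber eine aussergerichtliche "
    "Vereinbarung über Fr. 500'000."
)
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```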
ali2066/distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58
ali2066
2022-03-01T13:33:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2572 - Precision: 0.3363 - Recall: 0.5110 - F1: 0.4057 - Accuracy: 0.8931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.3976 | 0.1405 | 0.3058 | 0.1925 | 0.7921 | | No log | 2.0 | 60 | 0.3511 | 0.2360 | 0.4038 | 0.2979 | 0.8260 | | No log | 3.0 | 90 | 0.3595 | 0.1863 | 0.3827 | 0.2506 | 0.8211 | | No log | 4.0 | 120 | 0.3591 | 0.2144 | 0.4288 | 0.2859 | 0.8299 | | No log | 5.0 | 150 | 0.3605 | 0.1989 | 0.4212 | 0.2702 | 0.8343 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
coastalcph/fairlex-scotus-minilm
coastalcph
2022-03-01T13:24:01Z
12
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "legal", "fairlex", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: en pipeline_tag: fill-mask license: cc-by-nc-sa-4.0 tags: - legal - fairlex widget: - text: "Because the Court granted <mask> before judgment, the Court effectively stands in the shoes of the Court of Appeals and reviews the defendants’ appeals." --- # FairLex: A multilingual benchmark for evaluating fairness in legal text processing We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. --- Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. --- ## Pre-training details For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC). We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads. We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC). ## Models list | Model name | Training corpora | Language | |-----------------------------------|------------------|--------------------| | `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` | | `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` | | `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] | | `coastalcph/fairlex-cail-minilm` | CAIL | `zh` | ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-scotus-minilm") model = AutoModel.from_pretrained("coastalcph/fairlex-scotus-minilm") ``` ## Evaluation on downstream tasks Consider the experiments in the article: _Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._ ## Author - Publication ``` @inproceedings{chalkidis-2022-fairlex, author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders}, title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics}, year={2022}, address={Dublin, Ireland} } ``` Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
nickmuchi/vit-finetuned-cats-dogs
nickmuchi
2022-03-01T13:15:13Z
132
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy widget: - src: https://cdn.pixabay.com/photo/2021/09/19/12/19/animal-6637774_1280.jpg example_title: Dog - src: https://cdn.pixabay.com/photo/2017/02/20/18/03/cat-2083492_1280.jpg example_title: Cat model-index: - name: vit-finetuned-cats-dogs results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9971014261245728 --- # vit-finetuned-cats-dogs Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### cat ![cat](images/cat.jpg) #### dog ![dog](images/dog.jpg)
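The HuggingPics card above lists widget images but no inference snippet; a minimal sketch with the image-classification pipeline, reusing one of the card's widget URLs:

```python
from transformers import pipeline

# Image-classification inference with the cats-vs-dogs checkpoint above;
# the pipeline accepts a local file path or an image URL.
classifier = pipeline("image-classification", model="nickmuchi/vit-finetuned-cats-dogs")
print(classifier("https://cdn.pixabay.com/photo/2017/02/20/18/03/cat-2083492_1280.jpg"))
```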
coastalcph/fairlex-cail-minilm
coastalcph
2022-03-01T13:12:22Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "legal", "fairlex", "zh", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: zh pipeline_tag: fill-mask license: cc-by-nc-sa-4.0 tags: - legal - fairlex widget: - text: "上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。" --- # FairLex: A multilingual benchmark for evaluating fairness in legal text processing We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. --- Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. --- ## Pre-training details For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC). We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads. We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS and Chinese SPC). ## Models list | Model name | Training corpora | Language | |-----------------------------------|------------------|--------------------| | `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` | | `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` | | `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] | | `coastalcph/fairlex-cail-minilm` | CAIL | `zh` | ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-cail-minilm") model = AutoModel.from_pretrained("coastalcph/fairlex-cail-minilm") ``` ## Evaluation on downstream tasks Consider the experiments in the article: _Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._ ## Author - Publication ``` @inproceedings{chalkidis-2022-fairlex, author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders}, title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics}, year={2022}, address={Dublin, Ireland} } ``` Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
ali2066
2022-03-01T12:20:35Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7224 - Accuracy: 0.6979 - F1: 0.4736 - Precision: 0.5074 - Recall: 0.4440 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 95 | 0.6009 | 0.65 | 0.2222 | 0.625 | 0.1351 | | No log | 2.0 | 190 | 0.6140 | 0.675 | 0.3689 | 0.6552 | 0.2568 | | No log | 3.0 | 285 | 0.6580 | 0.67 | 0.4590 | 0.5833 | 0.3784 | | No log | 4.0 | 380 | 0.7560 | 0.665 | 0.4806 | 0.5636 | 0.4189 | | No log | 5.0 | 475 | 0.8226 | 0.665 | 0.464 | 0.5686 | 0.3919 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
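The sentence-classification card above reports metrics only. A minimal inference sketch (the label names come from the checkpoint's `id2label` config, which the card does not list):

```python
from transformers import pipeline

# Sequence-classification inference with the checkpoint documented above.
clf = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55",
)
print(clf("I am not convinced by this argument at all."))
```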
asini/wav2vec2-timit-demo
asini
2022-03-01T10:37:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-timit-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-timit-demo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4847 - Wer: 0.3462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.487 | 4.0 | 500 | 1.3466 | 1.0153 | | 0.6134 | 8.0 | 1000 | 0.4807 | 0.4538 | | 0.2214 | 12.0 | 1500 | 0.4684 | 0.3984 | | 0.1233 | 16.0 | 2000 | 0.5070 | 0.3779 | | 0.0847 | 20.0 | 2500 | 0.4965 | 0.3705 | | 0.0611 | 24.0 | 3000 | 0.4881 | 0.3535 | | 0.0464 | 28.0 | 3500 | 0.4847 | 0.3462 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
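The wav2vec2 card above documents training only; transcription can be sketched with the automatic-speech-recognition pipeline. The audio path is a placeholder, and wav2vec2-base expects 16 kHz mono input:

```python
from transformers import pipeline

# Speech-to-text inference with the fine-tuned checkpoint above;
# "sample.wav" is a placeholder for a 16 kHz mono recording.
asr = pipeline("automatic-speech-recognition", model="asini/wav2vec2-timit-demo")
print(asr("sample.wav")["text"])
```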
Akash7897/bert-base-cased-wikitext2
Akash7897
2022-03-01T10:29:35Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.8544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.0915 | 1.0 | 2346 | 7.0517 | | 6.905 | 2.0 | 4692 | 6.8735 | | 6.8565 | 3.0 | 7038 | 6.8924 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
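The card above reports only the evaluation cross-entropy loss. Following the convention used in the transformers language-modeling examples, that loss maps to perplexity via exp(loss), which is often the easier number to compare across checkpoints:

```python
import math

# Perplexity implied by the reported evaluation loss of 6.8544.
eval_loss = 6.8544
print(math.exp(eval_loss))  # ≈ 948
```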
huggingtweets/berniesanders-coffee__burger
huggingtweets
2022-03-01T10:09:58Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger & Bernie Sanders</div> <div style="text-align: center; font-size: 14px;">@berniesanders-coffee__burger</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Coffee Burger & Bernie Sanders. | Data | Coffee Burger | Bernie Sanders | | --- | --- | --- | | Tweets downloaded | 2471 | 3250 | | Retweets | 525 | 429 | | Short tweets | 337 | 10 | | Tweets kept | 1609 | 2811 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ltwd1tj1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-coffee__burger's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/121buw7a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/121buw7a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/berniesanders-coffee__burger') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/berniesanders-coffee__burger-sensanders
huggingtweets
2022-03-01T09:49:43Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/794725967948181506/Zn4x_F6i_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/794619281271033856/Fs0QQaH7_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Coffee Burger & Bernie Sanders & Bernie Sanders</div> <div style="text-align: center; font-size: 14px;">@berniesanders-coffee__burger-sensanders</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Coffee Burger & Bernie Sanders & Bernie Sanders. | Data | Coffee Burger | Bernie Sanders | Bernie Sanders | | --- | --- | --- | --- | | Tweets downloaded | 2471 | 3249 | 3250 | | Retweets | 525 | 296 | 429 | | Short tweets | 337 | 5 | 10 | | Tweets kept | 1609 | 2948 | 2811 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2k4t7tx8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-coffee__burger-sensanders's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31ey7s5h) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31ey7s5h/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/berniesanders-coffee__burger-sensanders') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/berniesanders-cnn-dril
huggingtweets
2022-03-01T09:43:27Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/berniesanders-cnn-dril/1646127802129/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bernie Sanders & wint & CNN</div> <div style="text-align: center; font-size: 14px;">@berniesanders-cnn-dril</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Bernie Sanders & wint & CNN. | Data | Bernie Sanders | wint | CNN | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3229 | 3250 | | Retweets | 429 | 473 | 30 | | Short tweets | 10 | 300 | 6 | | Tweets kept | 2811 | 2456 | 3214 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yapgpjj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-cnn-dril's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/berniesanders-cnn-dril') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
inovex/multi2convai-logistics-hr-bert
inovex
2022-03-01T09:22:15Z
7
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "hr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "gdje mogu staviti paket?" license: mit language: hr --- # Multi2ConvAI-Logistics: finetuned Bert for Croatian This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: Croatian (hr) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-hr-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-hr-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
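The Multi2ConvAI cards in this dump show how to load the classifier but stop before inference. A sketch that continues the snippet above, feeding it the card's Croatian widget example; mapping the arg-max logit through `id2label` is an assumption about the exported config, not something the card states:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-hr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-hr-bert")

# Classify the Croatian widget example ("where can I put the package?").
inputs = tokenizer("gdje mogu staviti paket?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```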
inovex/multi2convai-corona-de-bert
inovex
2022-03-01T09:18:20Z
4
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification - pytorch - transformers widget: - text: "Muss ich eine Maske tragen?" license: mit language: de --- # Multi2ConvAI-Corona: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
hfl/chinese-roberta-wwm-ext-large
hfl
2022-03-01T09:15:16Z
5,610
196
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - zh tags: - bert license: "apache-2.0" --- # Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical report in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
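The card above insists on BERT-class loaders even though "roberta" appears in the repository name, because the checkpoint ships a BERT architecture and vocabulary; the same applies to `hfl/chinese-roberta-wwm-ext` below. A minimal loading-and-masking sketch (the example sentence is illustrative, not from the card):

```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

# Load with BERT classes, as the card requests.
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertForMaskedLM.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

# Fill a [MASK] slot in a Chinese sentence
# ("use a language model to predict the [MASK] of the next word").
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("使用语言模型来预测下一个词的[MASK]。")[0])
```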
hfl/chinese-roberta-wwm-ext
hfl
2022-03-01T09:13:56Z
279,957
306
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - zh tags: - bert license: "apache-2.0" --- # Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical report in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
nguyenvulebinh/spoken-norm-taggen
nguyenvulebinh
2022-03-01T09:10:45Z
2
1
transformers
[ "transformers", "pytorch", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: cc-by-nc-4.0 ---
inovex/multi2convai-quality-en-mbert
inovex
2022-03-01T09:01:15Z
5
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Start the program" license: mit language: en --- # Multi2ConvAI-Quality: finetuned MBert for English This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-mbert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-mbert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
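Building on the loading snippet above, a minimal inference sketch (assuming the checkpoint ships an `id2label` mapping in its config) could be:

```python
import torch

# Classify the widget example from this card.
inputs = tokenizer("Start the program", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # intent label, assuming id2label is populated
```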
inovex/multi2convai-quality-en-bert
inovex
2022-03-01T09:00:55Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Start the program" license: mit language: en --- # Multi2ConvAI-Quality: finetuned Bert for English This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-quality-de-mbert
inovex
2022-03-01T09:00:39Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Starte das Programm" license: mit language: de --- # Multi2ConvAI-Quality: finetuned MBert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-mbert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-mbert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-quality-de-bert
inovex
2022-03-01T09:00:15Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Starte das Programm" license: mit language: de --- # Multi2ConvAI-Quality: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
gzomer/clip-multilingual
gzomer
2022-03-01T08:50:45Z
0
0
null
[ "clip", "vision", "text", "multilingual", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - clip - vision - text language: multilingual license: mit --- # MultiLingual CLIP Multilingual CLIP is a pre-trained model which can be used for multilingual semantic search and zero-shot image classification in 100 languages. # Model Architecture Multilingual CLIP was built using the [OpenAI CLIP](https://github.com/openai/CLIP) model. I kept the same vision encoder (ResNet 50x4) but replaced the original text encoder (Transformer) with a multilingual text encoder ([XLM-Roberta](https://huggingface.co/xlm-roberta-large)) and a configurable number of projection heads, as seen below: ![Model Architecture](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/858/046/datas/gallery.jpg) The model was trained in a distributed fashion on 16 Habana Gaudi Accelerators with mixed precision in two phases (using the COCO dataset for phase 1 and Google Conceptual Captions for phase 2). The training pipeline was built using PyTorch, PyTorch Lightning, and Distributed Data Parallel. # Datasets Three datasets have been used for building the model. COCO Captions was used for training phase 1 and Google Conceptual Captions was used for training phase 2. The Unsplash dataset was used for testing and inference. ## COCO Captions COCO (Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset. The COCO Captions dataset has around 85,000 image-caption pairs. Run the following to download the dataset: ```bash ./download_coco.sh ``` This dataset was used for the first pre-training phase. ## Google Conceptual Captions Conceptual Captions is a dataset consisting of ~3.3 million images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. Download the dataset's URLs/captions from [here](https://storage.cloud.google.com/gcc-data/Train/GCC-training.tsv?_ga=2.191230122.-1896153081.1529438250) and save them to `datasets/googlecc/googlecc.tsv`. The full dataset has over 3 million images, but you can select a subset by loading the `googlecc.tsv` file and saving only the number of rows you want (I have used 1 million images for training). Then run the following commands to download each image in the `googlecc.tsv` file: ```bash npm install node download_build_googlecc.js ``` This dataset was used for the second pre-training phase. ## Unsplash This dataset was used as the test set during inference. Run `python3.8 download_unsplash.py` to download the dataset.
# Training ![Training phase 1](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/858/047/datas/gallery.jpg) ![Training phase 2](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/858/048/datas/gallery.jpg) ## Setup Create two Habana instances ([AWS EC2 DL1](https://aws.amazon.com/ec2/instance-types/dl1/)) using the [Habana® Deep Learning Base AMI (Ubuntu 20.04)](https://aws.amazon.com/marketplace/pp/prodview-fw46rwuxrtfse). Create the PyTorch docker container by running: ```bash docker run --name pytorch -td --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.2.0/ubuntu20.04/habanalabs/pytorch-installer-1.10.0:1.2.0-585 ``` Enter the docker container by running: ``` docker exec -it pytorch /bin/bash ``` #### Setup password-less ssh between all connected servers 1. Configure password-less ssh between all nodes: Do the following in all the nodes' docker sessions: ```bash mkdir ~/.ssh cd ~/.ssh ssh-keygen -t rsa -b 4096 ``` Copy the id_rsa.pub contents from every node's docker to every other node's docker's ~/.ssh/authorized_keys (all public keys need to be in all hosts' authorized_keys): ```bash cat id_rsa.pub > authorized_keys vi authorized_keys ``` Copy the contents to the other systems and paste all hosts' public keys in all hosts' “authorized_keys” files. 2. On each system: Add all hosts (including itself) to known_hosts. The IP addresses used below are just for illustration: ```bash ssh-keyscan -p 3022 -H $IP1 >> ~/.ssh/known_hosts ssh-keyscan -p 3022 -H $IP2 >> ~/.ssh/known_hosts ``` 3. Change the Docker SSH port to 3022 ```bash sed -i 's/#Port 22/Port 3022/g' /etc/ssh/sshd_config sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config service ssh restart ``` [Allow all TCP](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) traffic between the nodes on AWS. Clone the git repo: ```bash git clone https://github.com/gzomer/clip-multilingual ``` Create the environment: ```bash python3.8 -m venv .env ``` Install the requirements: ```bash python3.8 -m pip install -r requirements.txt ``` Activate the environment: ```bash source .env/bin/activate ``` ## Training params Learning rate: 1e-3 Batch size: 64 Phase 1 - Epochs: 100 Phase 2 - Epochs: 15 ## Train script arguments ``` --dataset-num-workers Number of workers (default: 8) --dataset-type Dataset type (coco or googlecc) (default: coco) --dataset-dir Dataset dir (default: ./datasets/coco/) --dataset-subset-size Load only a subset of the dataset (useful for debugging) --dataset-train-split Dataset train split (default: 0.8) --train-device Type of device to use (default: hpu) --distributed-num-nodes Number of nodes (machines) (default: 2) --distributed-parallel-devices Number of parallel devices per node (default: 8) --distributed-master-address Master node IP address --distributed-master-port Master node port (default: 12345) --distributed-bucket-cap-mb DDP bucket cap MB (default: 200) --checkpoint-dir Model checkpoint dir (default: ./models) --checkpoint-save-every-n Save every n epochs (default: 1) --checkpoint-load-vision-path Load vision encoder checkpoint --checkpoint-load-text-path Load text encoder checkpoint --model-visual-name Which visual model to use (default: RN50x4) --model-textual-name Which textual model to use (default: xlm-roberta-base) --hyperparam-num-layers Number of layers (default: 3) --hyperparam-lr Model learning
rate (default: 0.001) --hyperparam-epochs Max epochs (default: 100) --hyperparam-precision Precision (default: 16) --hyperparam-batch-size Batch size (default: 64) --wandb-project W&B project name (default: clip) --wandb-enabled W&B is enabled? (default: True) ``` ## Habana Gaudi - 8 accelerators ### Phase 1 training ```bash python3.8 train.py --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 1 ``` ### Phase 2 training ```bash python3.8 train.py --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 1 --hyperparam-epochs 15 --checkpoint-load-text-path /home/models/text-last.ckpt --checkpoint-load-vision-path /home/models/vision-last.ckpt --checkpoint-dir ./models_phase2 ``` ## Habana Gaudi - 16 accelerators (multi-server training) Change the master IP address based on your instances (use local IP, not public IP). ### Phase 1 training ```bash NODE_RANK=0 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 ``` ```bash NODE_RANK=1 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 ``` ### Phase 2 training ```bash NODE_RANK=0 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 --hyperparam-epochs 10 --checkpoint-load-text-path /home/models/text-last.ckpt --checkpoint-load-vision-path /home/models/vision-last.ckpt --checkpoint-dir ./models_phase2 ``` ```bash NODE_RANK=1 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 --hyperparam-epochs 15 --checkpoint-load-text-path /home/models/text-last.ckpt --checkpoint-load-vision-path /home/models/vision-last.ckpt --checkpoint-dir ./models_phase2 ``` ## Other devices If you don't have access to a Habana Gaudi accelerator yet, you can also train on CPU/GPU, although it will be way slower. To train on CPU, just pass `--train-device=cpu` and on GPU `--train-device=cuda` to the `train.py` script. # Inference ## Loading pre-trained model from Hugging Face HUB ```python from models import create_and_load_from_hub model = create_and_load_from_hub() ``` ## Loading model from local checkpoint ```python from models import MultiLingualCLIP, load_model text_checkpoint_path = '/path/to/text model checkpoint' vision_checkpoint_path = '/path/to/vision model checkpoint' model = MultiLingualCLIP(num_layers=3) load_model(model, vision_checkpoint_path, text_checkpoint_path) ``` ## Generate embeddings Run the following (after downloading Unplash dataset): `python3.8 ./generate_embeddings.py` ## Searching images ```python import numpy as np from search import MultiLingualSearch images_embeddings = np.load('/path/to/images_embeddings') images_data = [...] # List of image info for each row of the embeddings. For instance, it could be a list of urls, filepaths, ids. 
They will be returned when calling the search function semantic_search = MultiLingualSearch(model, images_embeddings, images_data) results = semantic_search.search('विद्यालय में') # Means at school print(results) ``` ```json [{"image": "https://images.unsplash.com/photo-1557804506-669a67965ba0?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHwxM3x8bWVldGluZ3N8ZW58MHx8fHwxNjQ1NjA2MjQz&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.2461608648300171}, {"image": "https://images.unsplash.com/photo-1558403194-611308249627?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHwyMXx8cGVvcGxlJTIwd29ya2luZ3xlbnwwfHx8fDE2NDU2MDMyMjE&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.16881239414215088}, {"image": "https://images.unsplash.com/photo-1531497865144-0464ef8fb9a9?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHw4Nnx8cGVvcGxlJTIwd29ya2luZ3xlbnwwfHx8fDE2NDU2MDY5ODc&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.14744874835014343}, {"image": "https://images.unsplash.com/photo-1561089489-f13d5e730d72?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHw5MHx8ZWR1Y2F0aW9ufGVufDB8fHx8MTY0NTYwNjk1Nw&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.095176100730896}, {"image": "https://images.unsplash.com/photo-1580582932707-520aed937b7b?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHwxMnx8ZWR1Y2F0aW9ufGVufDB8fHx8MTY0NTYwMzIwMA&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.05218643322587013}] ```
aasem/wav2vec2-xls-r-300m-Urdu
aasem
2022-03-01T08:28:25Z
5
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- datasets: - common_voice language: - ur library_name: transformers license: mit metrics: - wer model-index: - name: wav2vec2-xls-r-300m-Urdu results: - task: type: automatic-speech-recognition dataset: name: common_voice type: common_voice args: ur metrics: - type: wer value: 0.2459 - type: cer value: 0.0691 tags: - audio - automatic-speech-recognition - speech --- Finetuning of [Facebook's 300M model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Common Voice 8.0 Urdu dataset
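A short, hedged usage sketch (assuming the checkpoint exposes a CTC head and expects 16 kHz mono audio; the file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aasem/wav2vec2-xls-r-300m-Urdu")
result = asr("urdu_sample.wav")  # placeholder path to a local recording
print(result["text"])
```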
huggingtweets/_deep_winter_
huggingtweets
2022-03-01T07:42:37Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/_deep_winter_/1646120552069/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1344880990464991239/DJ6glcyj_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">erin.</div> <div style="text-align: center; font-size: 14px;">@_deep_winter_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from erin.. | Data | erin. | | --- | --- | | Tweets downloaded | 3147 | | Retweets | 716 | | Short tweets | 243 | | Tweets kept | 2188 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bgxbc1v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_deep_winter_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dlbw7vo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dlbw7vo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_deep_winter_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
ali2066
2022-03-01T04:37:52Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4208 - Accuracy: 0.8283 - F1: 0.8915 - Precision: 0.8487 - Recall: 0.9389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 390 | 0.4443 | 0.7768 | 0.8589 | 0.8072 | 0.9176 | | 0.4532 | 2.0 | 780 | 0.4603 | 0.8098 | 0.8791 | 0.8302 | 0.9341 | | 0.2608 | 3.0 | 1170 | 0.5284 | 0.8061 | 0.8713 | 0.8567 | 0.8863 | | 0.1577 | 4.0 | 1560 | 0.6398 | 0.8085 | 0.8749 | 0.8472 | 0.9044 | | 0.1577 | 5.0 | 1950 | 0.7089 | 0.8085 | 0.8741 | 0.8516 | 0.8979 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
ali2066
2022-03-01T03:51:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2899 - Precision: 0.3170 - Recall: 0.5261 - F1: 0.3956 - Accuracy: 0.8799 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 | | No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 | | No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 | | No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 | | No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10
ali2066
2022-03-01T03:43:42Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2741 - Precision: 0.1936 - Recall: 0.3243 - F1: 0.2424 - Accuracy: 0.8764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.3235 | 0.1062 | 0.2076 | 0.1405 | 0.8556 | | No log | 2.0 | 60 | 0.2713 | 0.1710 | 0.3080 | 0.2199 | 0.8872 | | No log | 3.0 | 90 | 0.3246 | 0.2010 | 0.3391 | 0.2524 | 0.8334 | | No log | 4.0 | 120 | 0.3008 | 0.2011 | 0.3685 | 0.2602 | 0.8459 | | No log | 5.0 | 150 | 0.2714 | 0.1780 | 0.3772 | 0.2418 | 0.8661 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
armageddon/albert-squad-v2-covid-qa-deepset
armageddon
2022-03-01T02:04:26Z
28
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "dataset:covid_qa_deepset", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - covid_qa_deepset model-index: - name: covid_qa_analysis_albert_base_squad_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # covid_qa_analysis_albert_base_squad_v2 This model is a fine-tuned version of [abhilash1910/albert-squad-v2](https://huggingface.co/abhilash1910/albert-squad-v2) on the covid_qa_deepset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.6
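A hedged usage sketch for extractive QA with this checkpoint; the question and context below are made up for illustration:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="armageddon/albert-squad-v2-covid-qa-deepset")
result = qa(
    question="How long is the incubation period?",
    context="The incubation period is around 5 days on average, ranging from 2 to 14 days.",
)
print(result["answer"], result["score"])
```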
armageddon/roberta-large-squad2-covid-qa-deepset
armageddon
2022-03-01T01:48:21Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:covid_qa_deepset", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - covid_qa_deepset model-index: - name: covid_qa_analysis_roberta-large-squad2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # covid_qa_analysis_roberta-large-squad2 This model is a fine-tuned version of [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) on the covid_qa_deepset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
azizbarank/mbert-finetuned-azerbaijani-ner
azizbarank
2022-03-01T00:58:02Z
22
3
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: mbert-finetuned-azerbaijani-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: az metrics: - name: Precision type: precision value: 0.8898541731306236 - name: Recall type: recall value: 0.915416533673795 - name: F1 type: f1 value: 0.9024543738200126 - name: Accuracy type: accuracy value: 0.966948310139165 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-finetuned-azerbaijani-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.1385 - Precision: 0.8899 - Recall: 0.9154 - F1: 0.9025 - Accuracy: 0.9669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2928 | 1.0 | 625 | 0.1415 | 0.8584 | 0.8918 | 0.8748 | 0.9595 | | 0.1254 | 2.0 | 1250 | 0.1335 | 0.8875 | 0.9119 | 0.8996 | 0.9637 | | 0.077 | 3.0 | 1875 | 0.1385 | 0.8899 | 0.9154 | 0.9025 | 0.9669 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
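A short usage sketch (the Azerbaijani example sentence is my own; `aggregation_strategy="simple"` merges word pieces into entity spans):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="azizbarank/mbert-finetuned-azerbaijani-ner",
    aggregation_strategy="simple",
)
# With the wikiann label set, "Bakı" should come back as a LOC-type span.
print(ner("Bakı Azərbaycanın paytaxtıdır."))
```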
Msp/classifier
Msp
2022-02-28T22:02:26Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: apache-2.0 ---
PhilSad/GPTJ2B-SCP
PhilSad
2022-02-28T20:45:35Z
8
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
GPT-J 6B finetuned on SCP articles. Very experimental.
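A minimal, hedged generation sketch (note that a full GPT-J checkpoint is large and needs a correspondingly large amount of memory; the prompt is only an illustration):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="PhilSad/GPTJ2B-SCP")
print(generator("Item #: SCP-", max_new_tokens=60)[0]["generated_text"])
```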
Kevincp560/bart-large-cnn-finetuned-pubmed
Kevincp560
2022-02-28T19:04:22Z
5
2
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:pub_med_summarization_dataset", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer datasets: - pub_med_summarization_dataset metrics: - rouge model-index: - name: bart-large-cnn-finetuned-pubmed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pub_med_summarization_dataset type: pub_med_summarization_dataset args: document metrics: - name: Rouge1 type: rouge value: 40.4866 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.8416 - Rouge1: 40.4866 - Rouge2: 16.7472 - Rougel: 24.9831 - Rougelsum: 36.4002 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.932 | 1.0 | 4000 | 1.8110 | 38.1151 | 15.2255 | 23.4286 | 34.2521 | 141.8905 | | 1.7001 | 2.0 | 8000 | 1.7790 | 39.8217 | 16.3042 | 24.649 | 35.831 | 142.0 | | 1.5 | 3.0 | 12000 | 1.7971 | 40.6108 | 17.0446 | 25.1977 | 36.5556 | 141.9865 | | 1.3316 | 4.0 | 16000 | 1.8106 | 40.0466 | 16.4851 | 24.7094 | 36.0998 | 141.9335 | | 1.1996 | 5.0 | 20000 | 1.8416 | 40.4866 | 16.7472 | 24.9831 | 36.4002 | 142.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
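A short usage sketch; the input text is a made-up placeholder and the length limits are assumptions, not values from the training run:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Kevincp560/bart-large-cnn-finetuned-pubmed")
article = (
    "Background: We evaluated the effect of early mobilisation on length of stay "
    "in a randomised trial of 120 patients admitted with community-acquired pneumonia. "
    "Patients in the intervention arm were mobilised within 24 hours of admission."
)  # placeholder abstract-style text
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```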
peterhsu/mt5-small-finetuned-amazon-en-es
peterhsu
2022-02-28T18:40:06Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0255 - Rouge1: 17.5202 - Rouge2: 8.4634 - Rougel: 17.0175 - Rougelsum: 17.0528 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 8.094 | 1.0 | 1209 | 3.2933 | 12.7563 | 5.2606 | 12.4786 | 12.4961 | | 3.9263 | 2.0 | 2418 | 3.1487 | 16.2314 | 8.4716 | 15.6854 | 15.7506 | | 3.599 | 3.0 | 3627 | 3.0789 | 16.9233 | 8.1928 | 16.2596 | 16.2522 | | 3.429 | 4.0 | 4836 | 3.0492 | 17.2679 | 8.7561 | 16.6685 | 16.7399 | | 3.3279 | 5.0 | 6045 | 3.0384 | 17.6081 | 8.6721 | 17.0546 | 17.0368 | | 3.2518 | 6.0 | 7254 | 3.0343 | 17.2271 | 8.504 | 16.6285 | 16.6209 | | 3.2084 | 7.0 | 8463 | 3.0255 | 16.7859 | 8.054 | 16.2574 | 16.2853 | | 3.1839 | 8.0 | 9672 | 3.0255 | 17.5202 | 8.4634 | 17.0175 | 17.0528 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
akhaliq/YOLOP
akhaliq
2022-02-28T16:56:50Z
0
0
null
[ "object-detection", "arxiv:2108.11250", "arxiv:1612.07695", "arxiv:1606.02147", "region:us" ]
object-detection
2022-03-02T23:29:05Z
--- tags: - object-detection --- <div align="left"> ## You Only Look Once for Panoptic ​ Driving Perception > [**You Only Look at Once for Panoptic driving Perception**](https://arxiv.org/abs/2108.11250) > > by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm) > > *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))* --- ### The Illustration of YOLOP ![yolop](pictures/yolop.png) ### Contributions * We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the `BDD100K `dataset. * We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization. ### Results #### Traffic Object Detection Result | Model | Recall(%) | mAP50(%) | Speed(fps) | | -------------- | --------- | -------- | ---------- | | `Multinet` | 81.3 | 60.2 | 8.6 | | `DLT-Net` | 89.4 | 68.4 | 9.3 | | `Faster R-CNN` | 77.2 | 55.6 | 5.3 | | `YOLOv5s` | 86.8 | 77.2 | 82 | | `YOLOP(ours)` | 89.2 | 76.5 | 41 | #### Drivable Area Segmentation Result | Model | mIOU(%) | Speed(fps) | | ------------- | ------- | ---------- | | `Multinet` | 71.6 | 8.6 | | `DLT-Net` | 71.3 | 9.3 | | `PSPNet` | 89.6 | 11.1 | | `YOLOP(ours)` | 91.5 | 41 | #### Lane Detection Result: | Model | mIOU(%) | IOU(%) | | ------------- | ------- | ------ | | `ENet` | 34.12 | 14.64 | | `SCNN` | 35.79 | 15.84 | | `ENet-SAD` | 36.56 | 16.02 | | `YOLOP(ours)` | 70.50 | 26.20 | #### Ablation Studies 1: End-to-end v.s. Step-by-step: | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | | --------------- | --------- | ----- | ------- | ----------- | ------ | | `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 | | `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 | | `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 | | `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 | | `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | #### Ablation Studies 2: Multi-task v.s. 
Single task: | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) | | --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- | | `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 | | `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 | | `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 | | `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 | **Notes**: - The works we have used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf?utm_campaign=affiliate-ir-Optimise%20media%28%20South%20East%20Asia%29%20Pte.%20ltd._156_-99_national_R_all_ACQ_cpa_en&utm_content=&utm_source=%20388939), [code](https://github.com/MarvinTeichmann/MultiNet)), `DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)), `Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf), [code](https://github.com/ShaoqingRen/faster_rcnn)), `YOLOv5s` ([code](https://github.com/ultralytics/yolov5)), `PSPNet` ([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf), [code](https://github.com/hszhao/PSPNet)), `ENet` ([paper](https://arxiv.org/pdf/1606.02147.pdf), [code](https://github.com/osmr/imgclsmob)), `SCNN` ([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322), [code](https://github.com/XingangPan/SCNN)) and `SAD-ENet` ([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf), [code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful work. - In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others. --- ### Visualization #### Traffic Object Detection Result ![detect result](pictures/detect.png) #### Drivable Area Segmentation Result ![](pictures/da.png) #### Lane Detection Result ![](pictures/ll.png) **Notes**: - The visualization of the lane detection result has been post-processed by quadratic fitting.
--- ### Project Structure ```python ├─inference │ ├─images # inference images │ ├─output # inference result ├─lib │ ├─config/default # configuration of training and validation │ ├─core │ │ ├─activations.py # activation function │ │ ├─evaluate.py # calculation of metric │ │ ├─function.py # training and validation of model │ │ ├─general.py # calculation of metric、nms、conversion of data-format、visualization │ │ ├─loss.py # loss function │ │ ├─postprocess.py # postprocess(refine da-seg and ll-seg, unrelated to paper) │ ├─dataset │ │ ├─AutoDriveDataset.py # Superclass dataset,general function │ │ ├─bdd.py # Subclass dataset,specific function │ │ ├─hust.py # Subclass dataset(Campus scene, unrelated to paper) │ │ ├─convect.py │ │ ├─DemoDataset.py # demo dataset(image, video and stream) │ ├─models │ │ ├─YOLOP.py # Setup and Configuration of model │ │ ├─light.py # Model lightweight(unrelated to paper, zwt) │ │ ├─commom.py # calculation module │ ├─utils │ │ ├─augmentations.py # data augmentation │ │ ├─autoanchor.py # auto anchor(k-means) │ │ ├─split_dataset.py # (Campus scene, unrelated to paper) │ │ ├─utils.py # logging、device_select、time_measure、optimizer_select、model_save&initialize 、Distributed training │ ├─run │ │ ├─dataset/training time # Visualization, logging and model_save ├─tools │ │ ├─demo.py # demo(folder、camera) │ │ ├─test.py │ │ ├─train.py ├─toolkits │ │ ├─depoly # Deployment of model ├─weights # Pretraining model ``` --- ### Requirement This codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+: ``` conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch ``` See `requirements.txt` for additional dependencies and version requirements. ```setup pip install -r requirements.txt ``` ### Data preparation #### Download - Download the images from [images](https://bdd-data.berkeley.edu/). - Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing). - Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing). - Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing). We recommend the dataset directory structure to be the following: ``` # The id represents the correspondence relation ├─dataset root │ ├─images │ │ ├─train │ │ ├─val │ ├─det_annotations │ │ ├─train │ │ ├─val │ ├─da_seg_annotations │ │ ├─train │ │ ├─val │ ├─ll_seg_annotations │ │ ├─train │ │ ├─val ``` Update your dataset path in `./lib/config/default.py`. ### Training You can set the training configuration in `./lib/config/default.py` (including: the loading of the preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch_size). If you want to try alternating optimization or train the model for a single task, please modify the corresponding configuration in `./lib/config/default.py` to `True`. (As follows, all configurations are `False`, which means training multiple tasks end to end.)
```python # Alternating optimization _C.TRAIN.SEG_ONLY = False # Only train two segmentation branches _C.TRAIN.DET_ONLY = False # Only train detection branch _C.TRAIN.ENC_SEG_ONLY = False # Only train encoder and two segmentation branches _C.TRAIN.ENC_DET_ONLY = False # Only train encoder and detection branch # Single task _C.TRAIN.DRIVABLE_ONLY = False # Only train da_segmentation task _C.TRAIN.LANE_ONLY = False # Only train ll_segmentation task _C.TRAIN.DET_ONLY = False # Only train detection task ``` Start training: ```shell python tools/train.py ``` ### Evaluation You can set the evaluation configuration in `./lib/config/default.py` (including: batch_size and the threshold value for nms). Start evaluating: ```shell python tools/test.py --weights weights/End-to-end.pth ``` ### Demo Test We provide two testing methods. #### Folder You can store the image or video in `--source`, and then save the inference result to `--save-dir`: ```shell python tools/demo.py --source inference/images ``` #### Camera If there is a camera connected to your computer, you can set `--source` to the camera number (the default is 0). ```shell python tools/demo.py --source 0 ``` ### Deployment Our model can run inference in real time on a `Jetson Tx2`, with a `Zed Camera` to capture images. We use `TensorRT` for acceleration. We provide code for model deployment and inference in `./toolkits/deploy`. ## Citation If you find our paper and code useful for your research, please consider giving a star and citation: ```BibTeX @misc{2108.11250, Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang}, Title = {YOLOP: You Only Look Once for Panoptic Driving Perception}, Year = {2021}, Eprint = {arXiv:2108.11250}, } ```
Visual-Attention-Network/VAN-Base-original
Visual-Attention-Network
2022-02-28T16:34:32Z
0
0
null
[ "image-classification", "dataset:imagenet", "arxiv:2202.09741", "license:apache-2.0", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification datasets: - imagenet --- # VAN-Base VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [here](https://github.com/Visual-Attention-Network). ## Description While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. ## Evaluation Results | Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download | | :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: | | VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) | | VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) | | VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base), | | VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) | ### BibTeX entry and citation info ```bibtex @article{guo2022visual, title={Visual Attention Network}, author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min}, journal={arXiv preprint arXiv:2202.09741}, year={2022} } ```
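To make the LKA description above concrete, here is a rough PyTorch sketch of the large kernel attention decomposition; the exact kernel sizes (5x5 depthwise, 7x7 depthwise with dilation 3, 1x1 pointwise) and the elementwise gating reflect my reading of the reference implementation and should be treated as assumptions rather than a verified copy of it:

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Approximates a large (~21x21) convolution with three cheap pieces."""

    def __init__(self, dim: int):
        super().__init__()
        self.dw_conv = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)                 # local spatial context
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)  # long-range spatial context
        self.pointwise = nn.Conv2d(dim, dim, 1)                                      # channel adaptability

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pointwise(self.dw_dilated(self.dw_conv(x)))
        return attn * x  # the attention map gates the input elementwise

x = torch.randn(1, 64, 56, 56)
print(LargeKernelAttention(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```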
mohamed-illiyas/wav2vec-malayalam
mohamed-illiyas
2022-02-28T16:07:13Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec-malayalam results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec-malayalam This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0a0+3fd9dcf - Datasets 1.18.3 - Tokenizers 0.10.3
mradau/stress_score
mradau
2022-02-28T15:34:22Z
5
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_keras_callback model-index: - name: tmp10l_qol1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp10l_qol1 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.6
mradau/stress_classifier
mradau
2022-02-28T15:12:44Z
6
1
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_keras_callback model-index: - name: tmpacdj0jf1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmpacdj0jf1 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.6
espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer
espnet
2022-02-28T14:51:38Z
2
0
espnet
[ "espnet", "tensorboard", "audio", "automatic-speech-recognition", "en", "dataset:sinhala", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - sinhala license: cc-by-4.0 --- ## ESPnet2 ASR pretrained model ### `espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer` This model was trained by Karthik using DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/) ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
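Since the demo section above is still marked "coming soon", here is a hedged sketch of the usual ESPnet2 inference pattern (it assumes `espnet_model_zoo` and `soundfile` are installed and that the checkpoint can be fetched by its model tag; the audio path is a placeholder):

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer"
)

speech, rate = sf.read("sample.wav")  # placeholder path, ideally 16 kHz mono
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis: (text, tokens, token_ids, hyp)
print(text)
```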
EMBEDDIA/litlat-bert
EMBEDDIA
2022-02-28T13:46:36Z
62
5
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "lt", "lv", "en", "multilingual", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - lt - lv - en - multilingual license: cc-by-sa-4.0 --- # LitLat BERT LitLat BERT is a trilingual model, using the xlm-roberta-base architecture, trained on Lithuanian, Latvian, and English corpora. Focusing on three languages, the model performs better than [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased), while still offering an option for cross-lingual knowledge transfer, which a monolingual model would not. ### Named entity recognition evaluation We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R) and monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as a macro F1 score over the three named entity classes shared by all three datasets: person, location, organization. Language | mBERT | XLM-R | LVBERT | LitLat ---|---|---|---|--- Latvian | 0.830 | 0.865 | 0.797 | **0.881** Lithuanian | 0.797 | 0.817 | / | **0.850** English | 0.939 | 0.937 | / | **0.943**
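A short, hedged usage sketch for masked-token prediction; the `<mask>` token is assumed because the model uses the xlm-roberta-base architecture, and the example sentence is only an illustration:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="EMBEDDIA/litlat-bert")
for pred in fill("Vilnius is the capital of <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```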
inovex/multi2convai-quality-de-logreg-ft
inovex
2022-02-28T13:42:37Z
0
0
null
[ "text-classification", "de", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Hosted inference API not supported" license: mit language: de --- # Multi2ConvAI-Quality: German logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-quality-de-logreg-ft >>> Create pipeline for config: multi2convai-quality-de-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'de'. >>> >>> Enter your text (type 'stop' to end execution): Starte das Programm >>> 'Starte das Programm' was classified as 'no.start' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "de" domain = "quality" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Starte das Programm") label >>> Label(string='neo.start', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/de curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-logistics-de-logreg-ft
inovex
2022-02-28T12:31:23Z
0
0
null
[ "text-classification", "de", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Hosted inference API not supported" license: mit language: de --- # Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-logistics-de-logreg-ft >>> Create pipeline for config: multi2convai-logistics-de-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'logistics' and language 'de'. >>> >>> Enter your text (type 'stop' to end execution): Wo kann ich das Paket ablegen? >>> 'Wo kann ich das Paket ablegen?' was classified as 'details.safeplace' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "de" domain = "logistics" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Wo kann ich das Paket ablegen?") label >>> Label(string='details.safeplace', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/de curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-corona-de-logreg-ft
inovex
2022-02-28T12:12:25Z
0
0
null
[ "text-classification", "de", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Hosted inference API not supported" license: mit language: de --- # Multi2ConvAI-Corona: German logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-corona-de-logreg-ft >>> Create pipeline for config: multi2convai-corona-de-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'corona' and language 'de'. >>> >>> Enter your text (type 'stop' to end execution): Muss ich eine Maske tragen? >>> 'Muss ich eine Maske tragen?' was classified as 'corona.masks' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "de" domain = "corona" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Muss ich eine Maske tragen?") label >>> Label(string='corona.masks', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/de curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
peterhsu/marian-finetuned-kde4-en-to-zh_TW
peterhsu
2022-02-28T11:26:43Z
13
1
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-zh_TW results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-zh_TW metrics: - name: Bleu type: bleu value: 39.086345838465 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-zh_TW This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 1.0047 - Bleu: 39.0863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
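## Example usage

The fine-tuned checkpoint can be used directly with the `transformers` translation pipeline. This is a minimal inference sketch; the input sentence is only an illustration.

```python
from transformers import pipeline

# Load the fine-tuned English -> Traditional Chinese model
translator = pipeline(
    "translation",
    model="peterhsu/marian-finetuned-kde4-en-to-zh_TW",
)

print(translator("Default to expanded threads"))
# e.g. [{'translation_text': '...'}]
```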
spy24/autonlp-AUS-to-US-601516964
spy24
2022-02-28T11:21:11Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autonlp", "unk", "dataset:spy24/autonlp-data-AUS-to-US", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - spy24/autonlp-data-AUS-to-US co2_eq_emissions: 3.3930796843275846 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 601516964 - CO2 Emissions (in grams): 3.3930796843275846 ## Validation Metrics - Loss: 1.9823806285858154 - Rouge1: 42.8783 - Rouge2: 7.4603 - RougeL: 42.8492 - RougeLsum: 43.0556 - Gen Len: 2.8952 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-AUS-to-US-601516964 ```
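The same endpoint can be called from Python with `requests` (a minimal sketch mirroring the cURL command above; substitute your own API token):

```python
import requests

# Same Inference API endpoint as in the cURL example above
API_URL = "https://api-inference.huggingface.co/spy24/autonlp-AUS-to-US-601516964"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
```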
NbAiLab/roberta_jan_128_scandinavian
NbAiLab
2022-02-28T11:01:33Z
50
0
transformers
[ "transformers", "jax", "tensorboard", "roberta", "fill-mask", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: cc-by-sa-4.0 ---
spy24/autonlp-UK-to-US-600416931
spy24
2022-02-28T09:59:04Z
3
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autonlp", "unk", "dataset:spy24/autonlp-data-UK-to-US", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - spy24/autonlp-data-UK-to-US co2_eq_emissions: 1.113131499202784 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 600416931 - CO2 Emissions (in grams): 1.113131499202784 ## Validation Metrics - Loss: 1.8278849124908447 - Rouge1: 45.7945 - Rouge2: 8.5245 - RougeL: 45.8031 - RougeLsum: 45.9067 - Gen Len: 3.0622 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-UK-to-US-600416931 ```
Theivaprakasham/layoutlmv2-finetuned-sroie_mod
Theivaprakasham
2022-02-28T09:50:47Z
7
1
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-sroie_mod results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-sroie_mod This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.8.0+cu101 - Datasets 1.18.3 - Tokenizers 0.11.0
FardinSaboori/bert-finetuned-squad
FardinSaboori
2022-02-28T06:22:27Z
27
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
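## Example usage

The checkpoint can be queried with the `transformers` question-answering pipeline. This is a minimal inference sketch with an illustrative question/context pair.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="FardinSaboori/bert-finetuned-squad")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```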
Kuray107/timit-5percent-supervised
Kuray107
2022-02-28T06:07:49Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: timit-5percent-supervised results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # timit-5percent-supervised This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6615 - Wer: 0.2788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 5.3773 | 33.33 | 500 | 2.9693 | 1.0 | | 1.4746 | 66.67 | 1000 | 0.5050 | 0.3359 | | 0.1067 | 100.0 | 1500 | 0.5981 | 0.3054 | | 0.0388 | 133.33 | 2000 | 0.6192 | 0.2712 | | 0.0244 | 166.67 | 2500 | 0.6392 | 0.2776 | | 0.018 | 200.0 | 3000 | 0.6615 | 0.2788 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
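## Example usage

A minimal inference sketch with the `transformers` automatic-speech-recognition pipeline; `sample.wav` is a placeholder for any 16 kHz mono recording (decoding audio files requires ffmpeg).

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Kuray107/timit-5percent-supervised")

# Transcribe a local audio file (placeholder path)
transcription = asr("sample.wav")
print(transcription["text"])
```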
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
ali2066
2022-02-27T21:41:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6071 - Accuracy: 0.8337 - F1: 0.8922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3920 | 0.7988 | 0.8624 | | No log | 2.0 | 390 | 0.3873 | 0.8171 | 0.8739 | | 0.3673 | 3.0 | 585 | 0.4354 | 0.8256 | 0.8835 | | 0.3673 | 4.0 | 780 | 0.5358 | 0.8293 | 0.8887 | | 0.3673 | 5.0 | 975 | 0.5616 | 0.8366 | 0.8923 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
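## Example usage

Because the training data is not documented above, the meaning of the returned labels depends on the fine-tuning setup; the snippet below is only a minimal inference sketch with the `transformers` text-classification pipeline.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26",
)

print(classifier("This argument is well supported by the cited evidence."))
# e.g. [{'label': ..., 'score': ...}]
```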
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
ali2066
2022-02-27T21:36:21Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3825 - Accuracy: 0.8144 - F1: 0.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3975 | 0.8122 | 0.8795 | | No log | 2.0 | 390 | 0.4376 | 0.8085 | 0.8673 | | 0.3169 | 3.0 | 585 | 0.5736 | 0.8171 | 0.8790 | | 0.3169 | 4.0 | 780 | 0.8178 | 0.8098 | 0.8754 | | 0.3169 | 5.0 | 975 | 0.9244 | 0.8073 | 0.8738 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
osanseviero/xlm-roberta-base-finetuned-panx-de
osanseviero
2022-02-27T21:34:59Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8647022085959235 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1344 - F1: 0.8647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2568 | 1.0 | 525 | 0.1596 | 0.8210 | | 0.1279 | 2.0 | 1050 | 0.1368 | 0.8522 | | 0.0814 | 3.0 | 1575 | 0.1344 | 0.8647 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.18.0 - Tokenizers 0.10.3
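## Example usage

A minimal inference sketch with the `transformers` token-classification pipeline (the example sentence is only an illustration):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="osanseviero/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Angela Merkel besuchte das Brandenburger Tor in Berlin."))
```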
cassandra-themis/test_tcp_ca
cassandra-themis
2022-02-27T20:08:32Z
5
0
transformers
[ "transformers", "pytorch", "camembert", "token-classification", "generated_from_trainer", "dataset:cassandra-themis/ner-tcp-ca", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - cassandra-themis/ner-tcp-ca model-index: - name: camembert-ner-tcp-ca results: [] widget: - text: "RÉPUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL D'AIX EN PROVENCE\n\n\n\n10e Chambre\n\n\n\nARRÊT MIXTE\n\nDU 14 JUIN 2006\n\n\n\nNo/2006\n\n\n\n\n\nRôle No 99/09967\n\n\n\n\n\nJohn X...\n\nArlette Y... épouse X...\n\nPatrick X...\n\n\n\n\n\nC/\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS\n\n\n\n\n\nDécision déférée à la Cour :\n\n\n\nDécision rendue le 20 Avril 1999 par la Commission d'Indemnisation des Victimes d'Infractions Pénales près le Tribunal de Grande Instance de MARSEILLE, enregistrée\n\nau répertoire général sous le no 98/00491.\n\n\n\n\n\nAPPELANTS\n\n\n\nMonsieur John X..., décédé\n\nné le 17 Mars 1973 à MARSEILLE (13000), demeurant ... - 13000 MARSEILLE\n\nreprésenté par la SCP COHEN - GUEDJ, avoués à la Cour\n\n\n\nMadame Arlette Y... épouse X...\n\nprise es qualité d'héritière de John X..., décédé le 25/11/2001\n\nnée le 18 Août 1951 à SAINT JEAN DE COLE (DORDOGNE), ... - 13012 MARSEILLE\n\nreprésentée par la SCP COHEN - GUEDJ, avoués à la Cour,\n\nassistée de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\nMonsieur Patrick X...\n\npris en sa qualité d'héritier de John X..., décédé le 25/11/2001\n\nné le 12 Juin 1951 à MARSEILLE (BOUCHES DU RHÔNE), demeurant ... - 13012 MARSEILLE\n\nreprésenté par la SCP COHEN - GUEDJ, avoués à la Cour,\n\nassisté de la SELARL BAFFERT - FRUCTUS ET ASSOCIES, avocats au barreau de MARSEILLE\n\n\n\n\n\nINTIME\n\n\n\nFONDS DE GARANTIE DES VICTIMES D'ACTES DE TERRORISME ET D'AUTRES INFRACTIONS article L 422.1 du Code des Assurances, géré par le Fonds de Garantie contre les Accidents de Circulation et de Chasse, dont le siège social est sis 64 rue Defrance 94300 VINCENNES, 39 bd Vincent Delpuech - les Bureaux du Méditerranée - 13255 MARSEILLE\n\nreprésenté par la SCP GIACOMETTI - DESOMBRE, avoués à la Cour,\n\nassisté de Me Alain TUILLIER, avocat au barreau d'AIX EN PROVENCE\n\n\n\n\n\nCOMPOSITION DE LA COUR\n\n\n\nL'affaire a été débattue le 12 Avril 2006 en audience publique. Conformément à l'article 785 du Nouveau Code de Procédure Civile, Mr RAJBAUT, Conseiller a fait un rapport oral de l'affaire à l'audience avant les plaidoiries.\n\n\n\nLa Cour était composée de :\n\n\n\nMadame Elisabeth VIEUX, Présidente\n\nMonsieur Benjamin RAJBAUT, Conseiller\n\nMadame Dominique KLOTZ, Conseiller\n\n\n\n\n\nqui en ont délibéré\n\n\n\nGreffier lors des débats : Madame Geneviève JAUFFRES.\n\n\n\nLes parties ont été avisées que le prononcé public de la décision aura lieu par mise à disposition au greffe le 14 Juin 2006..\n\n\n\nMINISTÈRE PUBLIC :\n\nAuquel l'affaire a été régulièrement communiquée.\n\n" example_title: "Exemple 1" - text: "RÉPUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nPhD / BLL\n\n\n\nNuméro / 06\n\n\n\nCOUR D'APPEL DE PAU\n\n2ème CH-Section 1\n\n\n\nARRÊT DU 19 janvier 2006\n\n\n\nDossier : 04 / 03078\n\n\n\nNature affaire :\n\n\n\nAutres demandes relatives à un bail d'habitation ou à un bail professionnel\n\n\n\nAffaire :\n\n\n\nBerthe X... 
épouse Y...\n\n\n\nC /\n\n\n\nDominique Z...,\n\nCorinne X...\n\n\n\nRÉPUBLIQUE FRANÇAISE\n\n\n\nAU NOM DU PEUPLE FRANÇAIS\n\n\n\nA R R Ê T\n\n\n\nprononcé par Monsieur GRANGER, conseiller,\n\nen vertu de l'article 452 du Nouveau Code de Procédure Civile,\n\n\n\nassisté de Monsieur LASBIATES, Greffier,\n\n\n\nà l'audience publique du 19 janvier 2006\n\ndate indiquée à l'issue des débats.\n\n\n\n* * * * *\n\n\n\nAPRES DÉBATS\n\n\n\nà l'audience publique tenue le 24 Novembre 2005, devant :\n\n\n\nMonsieur DARRACQ, magistrat chargé du rapport,\n\n\n\nassisté de Monsieur LASBIATES, greffier présent à l'appel des causes,\n\n\n\nMonsieur DARRACQ, en application des articles 786 et 910 du Nouveau Code de Procédure Civile et à défaut d'opposition a tenu l'audience pour entendre les plaidoiries et en a rendu compte à la Cour composée de :\n\n\n\nMonsieur PETRIAT, Conseiller faisant fonction de Président, par suite de l'empêchement légitime de tous les titulaires et des magistrats désignés par ordonnance et se trouvant le magistrat du siège présent le plus ancien dans l'ordre de nomination à la Cour\n\n\n\nMonsieur GRANGER, Conseiller\n\nMonsieur DARRACQ, Vice-Président placé, désigné par ordonnance du 12 septembre 2005\n\n\n\nqui en ont délibéré conformément à la loi.\n\n\n\ndans l'affaire opposant :\n\n\n\nAPPELANTE :\n\n\n\nMadame Berthe X... épouse Y...\n\nnée le 13 Juin 1942 à ARCANGUES (64)\n\nde nationalité française\n\n...\n\n...\n\n12500 ESPALION\n\n\n\nreprésentée par la S. C. P. LONGIN C. ET P., avoués à la Cour\n\nassistée de Maître BLAZY-ANDRIEU, avocat au barreau de BAYONNE\n\n\n\nINTIMES :\n\n\n\nMonsieur Dominique Camille Z...\n\nné le 13 juin 1954 à Chatou (78)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\nMadame Corinne X...\n\nnée le 3 juillet 1969 à Bidart (64)\n\n...\n\n...\n\n64200 BIARRITZ\n\n\n\n(bénéficient d'une aide juridictionnelle Totale numéro 2004 / 006320 du 24 / 02 / 2005 accordée par le bureau d'aide juridictionnelle de PAU)\n\n\n\nreprésentés par la S. C. P. F. PIAULT / M. LACRAMPE-CARRAZE, avoués à la Cour\n\nassistés de Maître FOURGEAU, avocat au barreau de BAYONNE\n\n\n\nsur appel de la décision\n\nen date du 24 AOUT 2004\n\nrendue par le TRIBUNAL D'INSTANCE DE BIARRITZ" example_title: "Exemple 2" - text: "RÉPUBLIQUE FRANCAISE\n\nAU NOM DU PEUPLE FRANCAIS\n\n\n\nCOUR D'APPEL DE DOUAI\n\n\n\nTROISIÈME CHAMBRE\n\n\n\nARRÊT DU 26 / 01 / 2006\n\n\n\nBAUX RURAUX\n\n\n\nNo RG : 05 / 04854 jonction avec dossier RG No 05 / 04858\n\n\n\nTribunal paritaire des baux ruraux d'AVESNES SUR HELPE\n\ndu 27 Juillet 2005 jugements no 99 / 000010 et 04 / 000006\n\n\n\nAPPELANTE\n\nMadame Marie-Noëlle X... épouse Y...\n\nDemeurant\n\n...\n\n59138 PONT SUR SAMBRE\n\n\n\nreprésentée par Me STERLILN de la SCP JP STERLIN-C STERLIN, avocats au barreau d'AMIENS\n\n\n\nINTIMÉS\n\nMonsieur Michel Z...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nreprésenté par Me VILLESECHE de la SCP ROFFIAEN-LE FUR-VILLESECHE, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMonsieur Avit X...\n\nDemeurant\n\n...\n\n59138 BACHANT\n\n\n\nreprésenté par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nMadame Marie-Christine X... épouse A...\n\nDemeurant\n\n...\n\n59750 FEIGNIES\n\n\n\nreprésentée par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Claire X... 
épouse B...\n\nDemeurant\n\n...\n\n59550 PRISCHES\n\n\n\nreprésentée par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\n\n\nMadame Marie-Antoinette X... épouse C...\n\nDemeurant\n\n...\n\n59440 ST AUBIN\n\n\n\nreprésentée par Me COLSON de la SCP CHABOT-COLSON, avocats au barreau d'AVESNES SUR HELPE\n\n\n\nCOMPOSITION DE LA COUR LORS DES DÉBATS ET DU DÉLIBÉRÉ\n\nMadame MERFELD, Président de chambre\n\nMadame CONVAIN, Conseiller\n\nMadame PAOLI, Conseiller\n\n---------------------\n\nGREFFIER LORS DES DÉBATS : Madame GAMEZ\n\n" example_title: "Exemple 3" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-ner-tcp-ca This model is a fine-tuned version of [cassandra-themis/camembert-base-juri](https://huggingface.co/cassandra-themis/camembert-base-juri) on the cassandra-themis/ner-tcp-ca full dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30.0 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
ali2066
2022-02-27T18:46:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0926 - Accuracy: 0.9772 - F1: 0.9883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 104 | 0.0539 | 0.9885 | 0.9942 | | No log | 2.0 | 208 | 0.0282 | 0.9885 | 0.9942 | | No log | 3.0 | 312 | 0.0317 | 0.9914 | 0.9956 | | No log | 4.0 | 416 | 0.0462 | 0.9885 | 0.9942 | | 0.0409 | 5.0 | 520 | 0.0517 | 0.9885 | 0.9942 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
ali2066
2022-02-27T18:35:51Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3358 - Accuracy: 0.8688 - F1: 0.9225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 81 | 0.4116 | 0.8382 | 0.9027 | | No log | 2.0 | 162 | 0.4360 | 0.8382 | 0.8952 | | No log | 3.0 | 243 | 0.5719 | 0.8382 | 0.8995 | | No log | 4.0 | 324 | 0.7251 | 0.8493 | 0.9021 | | No log | 5.0 | 405 | 0.8384 | 0.8456 | 0.9019 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
ali2066
2022-02-27T18:33:05Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3455 - Accuracy: 0.8609 - F1: 0.9156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 | | No log | 2.0 | 162 | 0.4497 | 0.8382 | 0.9 | | No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 | | No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 | | No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
ali2066
2022-02-27T18:11:13Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4917 - Accuracy: 0.8231 - F1: 0.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 | | No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 | | 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 | | 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 | | 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
bullmount/hseBert-it-cased
bullmount
2022-02-27T18:08:11Z
14
2
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: it license: mit widget: - text: "È stata pubblicata la [MASK] di conversione del D.L. 24 dicembre 2021 n. 221 ." - text: "La legge fornisce l’esatta [MASK] di Green pass base." - text: "Il datore di lavoro organizza e predispone i posti di lavoro di cui all'articolo 173, in [MASK] ai requisiti minimi di cui all'allegato XXXIV." - text: "Le principali novità riguardano la quarantena precauzionale e il [MASK] di autosorveglianza." --- # hseBERT **hseBert-it-cased** is a BERT model obtained by adaptively fine-tuning [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) with masked language modelling (MLM) on Italian regulatory texts (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81; Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), comprising approximately 7k sentences. # Usage ```python from transformers import AutoModel, AutoTokenizer model_name = "bullmount/hseBert-it-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ```
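The checkpoint can also be queried through the fill-mask pipeline, for example with one of the widget sentences listed above (a minimal sketch):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bullmount/hseBert-it-cased")

# One of the widget examples from this card
print(fill_mask("La legge fornisce l’esatta [MASK] di Green pass base."))
```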
ali2066/finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
ali2066
2022-02-27T18:01:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6049 - Accuracy: 0.6926 - F1: 0.4160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 | | No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 | | No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 | | No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 | | No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
ali2066
2022-02-27T17:59:00Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6049 - Accuracy: 0.6926 - F1: 0.4160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 | | No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 | | No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 | | No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 | | No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
ali2066
2022-02-27T17:56:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6049 - Accuracy: 0.6926 - F1: 0.4160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 | | No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 | | No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 | | No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 | | No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
ali2066
2022-02-27T17:51:50Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8231 - F1: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 | | No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 | | 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 | | 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 | | 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
ali2066
2022-02-27T17:46:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8231 - F1: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 | | No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 | | 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 | | 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 | | 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
ali2066
2022-02-27T17:40:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8231 - F1: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 | | No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 | | 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 | | 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 | | 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
ali2066
2022-02-27T17:12:30Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7600 - Accuracy: 0.8144 - F1: 0.8788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 | | No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 | | 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 | | 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 | | 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
ali2066
2022-02-27T17:01:16Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7600 - Accuracy: 0.8144 - F1: 0.8788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 | | No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 | | 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 | | 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 | | 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
ali2066
2022-02-27T16:44:27Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4095 - Accuracy: 0.8263 - F1: 0.8865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 | | No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 | | 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 | | 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 | | 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
ali2066
2022-02-27T16:38:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4095 - Accuracy: 0.8263 - F1: 0.8865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 | | No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 | | 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 | | 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 | | 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
Daryaflp/roberta-retrained_ru_covid
Daryaflp
2022-02-27T16:18:22Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer model-index: - name: roberta-retrained_ru_covid results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-retrained_ru_covid This model is a fine-tuned version of [blinoff/roberta-base-russian-v0](https://huggingface.co/blinoff/roberta-base-russian-v0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8518 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
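Since this card lists only hyperparameters, a short fill-mask sketch may help readers try the checkpoint. It assumes the tokenizer keeps RoBERTa's `<mask>` token from the Russian base model, and the example sentence is illustrative only.

```python
from transformers import pipeline

# Masked-LM inference for the checkpoint described above; the <mask> token is
# assumed to follow the RoBERTa convention inherited from the Russian base model.
fill_mask = pipeline("fill-mask", model="Daryaflp/roberta-retrained_ru_covid")

for prediction in fill_mask("Вакцина защищает от <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```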
emilyalsentzer/Bio_Discharge_Summary_BERT
emilyalsentzer
2022-02-27T13:59:50Z
5,949
34
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "en", "arxiv:1904.03323", "arxiv:1901.08746", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "en" tags: - fill-mask license: mit --- # ClinicalBERT - Bio + Discharge Summary BERT Model The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries. This model card describes the Bio+Discharge Summary BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on only discharge summaries from MIMIC. ## Pretraining Data The `Bio_Discharge_Summary_BERT` model was trained on all discharge summaries from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). All notes from the `NOTEEVENTS` table were included (~880M words). ## Model Pretraining ### Note Preprocessing Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en core sci md` tokenizer). ### Pretraining Procedures The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`). ### Pretraining Hyperparameters We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5 · 10−5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20). ## How to use the model Load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT") model = AutoModel.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT") ``` ## More Information Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks. ## Questions? Post a Github issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email [email protected] with any questions.
facebook/wav2vec2-base-mt-voxpopuli-v2
facebook
2022-02-27T13:15:54Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "mt", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: mt tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **mt**, using **9.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **mt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
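Because the checkpoint ships without a tokenizer, it can only serve as a feature extractor until it is fine-tuned. The sketch below illustrates that, under the assumption that the repo includes a preprocessor config; a dummy 16 kHz waveform stands in for real audio, and loading the pre-training checkpoint into `Wav2Vec2Model` may warn about unused quantizer weights.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-base-mt-voxpopuli-v2"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)  # may warn about unused pre-training heads

# One second of dummy 16 kHz audio; replace with a real mono waveform
# (e.g. loaded with soundfile/librosa and resampled to 16 kHz).
speech = torch.randn(16000).numpy()

inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```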
facebook/wav2vec2-base-lt-voxpopuli-v2
facebook
2022-02-27T13:15:36Z
22
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "lt", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: lt tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **lt**, using **14.4k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-lv-voxpopuli-v2
facebook
2022-02-27T13:15:26Z
6
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "lv", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: lv tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **lv**, using **13.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lv**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-hu-voxpopuli-v2
facebook
2022-02-27T13:15:17Z
10
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "hu", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: hu tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **hu**, using **17.7k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **hu**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-fi-voxpopuli-v2
facebook
2022-02-27T13:15:08Z
6
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "fi", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: fi tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **fi**, using **14.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **fi**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-sl-voxpopuli-v2
facebook
2022-02-27T13:14:49Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "sl", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: sl tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **sl**, using **11.3k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-pl-voxpopuli-v2
facebook
2022-02-27T13:14:25Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "pl", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pl tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **pl**, using **21.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **pl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-hr-voxpopuli-v2
facebook
2022-02-27T13:14:14Z
6
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "hr", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: hr tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **hr**, using **8.1k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **hr**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-de-voxpopuli-v2
facebook
2022-02-27T13:13:15Z
7
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "de", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: de tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **de**, using **23.2k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **de**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-ro-voxpopuli-v2
facebook
2022-02-27T13:12:40Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "ro", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ro tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **ro**, using **17.9k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **ro**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-fr-voxpopuli-v2
facebook
2022-02-27T13:12:05Z
83
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "fr", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: fr tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **fr**, using **22.8k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **fr**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-base-es-voxpopuli-v2
facebook
2022-02-27T13:11:53Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "es", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: es tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **es**, using **21.4k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **es**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-large-el-voxpopuli-v2
facebook
2022-02-27T12:48:30Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "el", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: el tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-large-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only on **el**, using **17.7k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **el**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
facebook/wav2vec2-large-uralic-voxpopuli-v2
facebook
2022-02-27T12:43:18Z
158
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: uralic tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-large-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only on **uralic** languages, using **42.5k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in the target **uralic** language. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official [website](https://github.com/facebookresearch/voxpopuli/) for more information.
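The note above says a tokenizer must be created before fine-tuning; the sketch below shows what that first step could look like, following the recipe from the linked fine-tuning blog post. The toy `vocab` is a placeholder assumption: in practice the character set is extracted from the labeled transcripts of the target Uralic language.

```python
import json
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

# Placeholder character vocabulary; build it from your labeled transcripts,
# as the fine-tuning blog post describes.
vocab = {"<pad>": 0, "<unk>": 1, "|": 2, "a": 3, "b": 4, "c": 5}
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="<unk>", pad_token="<pad>", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Attach a freshly initialised CTC head to the pretrained encoder; fine-tuning
# on labeled 16 kHz speech then proceeds as in the linked blog post.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-uralic-voxpopuli-v2",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
```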
huggingartists/the-beatles
huggingartists
2022-02-27T11:47:43Z
7
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/the-beatles", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/the-beatles tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/c771d3ee1c0969503cdaf34edf76f38a.400x400x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The Beatles</div> <a href="https://genius.com/artists/the-beatles"> <div style="text-align: center; font-size: 14px;">@the-beatles</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from The Beatles. The dataset is available [here](https://huggingface.co/datasets/huggingartists/the-beatles) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/the-beatles") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2p2c5864/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Beatles' lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/286vzjah) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/286vzjah/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/the-beatles') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-beatles") model = AutoModelWithLMHead.from_pretrained("huggingartists/the-beatles") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the lyrics in the training data further affect the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
nsi319/distilbert-base-uncased-finetuned-app
nsi319
2022-02-27T10:56:19Z
12
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "mobile app descriptions", "playstore", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: "en" thumbnail: "https://huggingface.co/nsi319" tags: - distilbert - pytorch - text-classification - mobile app descriptions - playstore license: "mit" inference: true --- # Mobile App Classification ## Model description DistilBERT is a transformer model, smaller and faster than BERT, which was pre-trained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. The [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model is fine-tuned to classify an mobile app description into one of **6 play store categories**. Trained on 9000 samples of English App Descriptions and associated categories of apps available in [Google Play](https://play.google.com/store/apps). ## Fine-tuning The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best evaluation f1 score achieved by the model was 0.9034534096919489, found after 4 epochs. The accuracy of the model on the test set was 0.9033. ## How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("nsi319/distilbert-base-uncased-finetuned-app") model = AutoModelForSequenceClassification.from_pretrained("nsi319/distilbert-base-uncased-finetuned-app") classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) classifier("Disney+ has something for everyone and every mood, all in one place. With endless entertainment from Disney, Pixar, Marvel, Star Wars, National Geographic and Star, there's always something exciting to watch. Watch the latest releases, Original series and movies, classic films, throwbacks and so much more.") '''Output''' [{'label': 'Entertainment', 'score': 0.9014402031898499}] ``` ## Limitations Training data consists of apps from 6 play store categories namely Education, Entertainment, Productivity, Sports, News & Magazines and Photography.
nsi319/xlnet-base-cased-finetuned-app
nsi319
2022-02-27T10:52:49Z
8
0
transformers
[ "transformers", "pytorch", "xlnet", "text-classification", "mobile app descriptions", "playstore", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: "en" thumbnail: "https://huggingface.co/nsi319" tags: - xlnet - pytorch - text-classification - mobile app descriptions - playstore license: "mit" inference: true --- # Mobile App Classification ## Model description XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. The [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) model is fine-tuned to classify an mobile app description into one of **6 play store categories**. Trained on 9000 samples of English App Descriptions and associated categories of apps available in [Google Play](https://play.google.com/store/apps). ## Fine-tuning The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best evaluation f1 score achieved by the model was 0.8951433611497919, found after 5 epochs. The accuracy of the model on the test set was 0.895. ## How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("nsi319/xlnet-base-cased-finetuned-app") model = AutoModelForSequenceClassification.from_pretrained("nsi319/xlnet-base-cased-finetuned-app") classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) classifier("The official Google Photos app is made for the way you take photos today and includes essential features like shared albums, automatic creations and an advanced editing suite. Additionally every Google Account comes with 15 GB of free storage and you can choose to automatically back up all your photos and videos in High quality or Original quality. You can then access them from any connected device and on photos.google.com.") '''Output''' [{'label': 'Photography', 'score': 0.998849630355835}] ``` ## Limitations Training data consists of apps from 6 play store categories namely Education, Entertainment, Productivity, Sports, News & Magazines and Photography.
bullmount/xlm-roberta-base-finetuned-panx-it
bullmount
2022-02-27T08:04:14Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit widget: - text: "Luigi è nato a Roma." - text: "Antonio ha chiesto ad Alessia di recarsi alla sede INAIL." tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.9097618003799502 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1417 - F1: 0.9098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2754 | 1.0 | 834 | 0.1683 | 0.8717 | | 0.1366 | 2.0 | 1668 | 0.1449 | 0.8921 | | 0.0863 | 3.0 | 2502 | 0.1417 | 0.9098 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
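The widget sentences in the card can also be run locally. Here is a minimal sketch using the token-classification pipeline, with `aggregation_strategy` merging word-piece tokens into entity spans (available in recent Transformers releases); the printed entity names depend on the label mapping stored in the checkpoint.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bullmount/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entities
)

for entity in ner("Luigi è nato a Roma."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```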
Herais/pred_genre
Herais
2022-02-27T05:26:29Z
6
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "classification", "zh", "dataset:Custom", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - zh tags: - classification license: apache-2.0 datasets: - Custom metrics: - rouge --- This model predicts the genre of a TV series or movie given a synopsis of about 200 Chinese characters. The model is trained on TV and movie datasets and takes simplified Chinese as input. We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint. #### Sample Usage ```python import torch from transformers import BertTokenizer, BertForSequenceClassification device = torch.device("cuda" if torch.cuda.is_available() else "cpu") checkpoint = "Herais/pred_genre" tokenizer = BertTokenizer.from_pretrained(checkpoint) model = BertForSequenceClassification.from_pretrained(checkpoint, problem_type="single_label_classification").to(device) label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0, '其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6, '科幻': 9, '神话': 8, '宫廷': 5} id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇', 2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打', 9: '科幻', 8: '神话', 5: '宫廷'} synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\ 他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\ 成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\ 为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\ 也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\ 继续为检察事业贡献自己的青春。 """ inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device) model.eval() with torch.no_grad(): outputs = model(**inputs) label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy() labels_pred = [id2label_genre[label] for label in label_ids_pred] print(labels_pred) # ['涉案'] ``` Citation TBA
Jackett/subject_classifier
Jackett
2022-02-27T04:57:39Z
5
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Label association {'Biology': 0, 'Physics': 1, 'Chemistry': 2, 'Maths': 3}
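The card gives only the label association, so a small usage sketch is added below. Whether the checkpoint already stores these names in its config is not stated, so the `LABEL_i` fallback handling and the example question are assumptions.

```python
from transformers import pipeline

# Label association taken from the card above.
label_map = {0: "Biology", 1: "Physics", 2: "Chemistry", 3: "Maths"}

classifier = pipeline("text-classification", model="Jackett/subject_classifier")
prediction = classifier("Find the derivative of x**2 + 3*x with respect to x.")[0]

# If the checkpoint only exposes generic names such as "LABEL_3", map them
# back to subjects using the association above.
if prediction["label"].startswith("LABEL_"):
    prediction["label"] = label_map[int(prediction["label"].rsplit("_", 1)[-1])]
print(prediction)
```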
msintaha/bert-base-uncased-copa-kb-27
msintaha
2022-02-27T03:24:40Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - super_glue metrics: - accuracy model-index: - name: bert-base-uncased-copa-kb-27 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-copa-kb-27 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6114 - Accuracy: 0.7100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6534 | 0.7400 | | No log | 2.0 | 80 | 0.6114 | 0.7100 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
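The COPA card does not show inference code, so a hedged sketch follows. The exact way premise and alternatives were paired during fine-tuning is not documented here, so the encoding below (premise as the first segment, each candidate as the second) is an assumption, and the example item is made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "msintaha/bert-base-uncased-copa-kb-27"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# A made-up COPA-style item: one premise and two candidate alternatives.
premise = "The man broke his toe."
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Encode the premise against each choice, then add a batch dimension so the
# tensors have shape (batch=1, num_choices=2, seq_len).
encoding = tokenizer([premise, premise], choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2)
print("Predicted alternative:", int(logits.argmax(dim=-1)))
```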