modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
histinct7002/distilbert-base-uncased-finetuned-cola
histinct7002
2022-02-07T06:18:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5290966132843783 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4600 - Matthews Correlation: 0.5291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5227 | 1.0 | 535 | 0.4715 | 0.4678 | | 0.3493 | 2.0 | 1070 | 0.4600 | 0.5291 | | 0.2393 | 3.0 | 1605 | 0.6018 | 0.5219 | | 0.1714 | 4.0 | 2140 | 0.7228 | 0.5245 | | 0.1289 | 5.0 | 2675 | 0.8154 | 0.5279 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.5.1 - Datasets 1.18.3 - Tokenizers 0.10.3
gagan3012/ViTGPT2_vizwiz
gagan3012
2022-02-07T05:54:26Z
31
1
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "image-to-text", "endpoints_compatible", "region:us" ]
image-to-text
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer - image-to-text model-index: - name: ViTGPT2_vizwiz results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViTGPT2_vizwiz This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.1207 | 0.07 | 1000 | 0.0906 | | 0.0916 | 0.14 | 2000 | 0.0861 | | 0.0879 | 0.2 | 3000 | 0.0840 | | 0.0856 | 0.27 | 4000 | 0.0822 | | 0.0834 | 0.34 | 5000 | 0.0806 | | 0.0817 | 0.41 | 6000 | 0.0795 | | 0.0812 | 0.48 | 7000 | 0.0785 | | 0.0808 | 0.55 | 8000 | 0.0779 | | 0.0796 | 0.61 | 9000 | 0.0771 | | 0.0786 | 0.68 | 10000 | 0.0767 | | 0.0774 | 0.75 | 11000 | 0.0762 | | 0.0772 | 0.82 | 12000 | 0.0758 | | 0.0756 | 0.89 | 13000 | 0.0754 | | 0.0759 | 0.96 | 14000 | 0.0750 | | 0.0756 | 1.02 | 15000 | 0.0748 | | 0.0726 | 1.09 | 16000 | 0.0745 | | 0.0727 | 1.16 | 17000 | 0.0745 | | 0.0715 | 1.23 | 18000 | 0.0742 | | 0.0726 | 1.3 | 19000 | 0.0741 | | 0.072 | 1.37 | 20000 | 0.0738 | | 0.0723 | 1.43 | 21000 | 0.0735 | | 0.0715 | 1.5 | 22000 | 0.0734 | | 0.0724 | 1.57 | 23000 | 0.0732 | | 0.0723 | 1.64 | 24000 | 0.0730 | | 0.0718 | 1.71 | 25000 | 0.0729 | | 0.07 | 1.78 | 26000 | 0.0728 | | 0.0702 | 1.84 | 27000 | 0.0726 | | 0.0704 | 1.91 | 28000 | 0.0725 | | 0.0703 | 1.98 | 29000 | 0.0725 | | 0.0686 | 2.05 | 30000 | 0.0726 | | 0.0687 | 2.12 | 31000 | 0.0726 | | 0.0688 | 2.19 | 32000 | 0.0724 | | 0.0677 | 2.25 | 33000 | 0.0724 | | 0.0665 | 2.32 | 34000 | 0.0725 | | 0.0684 | 2.39 | 35000 | 0.0723 | | 0.0678 | 2.46 | 36000 | 0.0722 | | 0.0686 | 2.53 | 37000 | 0.0722 | | 0.067 | 2.59 | 38000 | 0.0721 | | 0.0669 | 2.66 | 39000 | 0.0721 | | 0.0673 | 2.73 | 40000 | 0.0721 | | 0.0673 | 2.8 | 41000 | 0.0720 | | 0.0662 | 2.87 | 42000 | 0.0720 | | 0.0681 | 2.94 | 43000 | 0.0719 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
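The card above documents training but not inference. Below is a minimal captioning sketch for a ViT+GPT-2 VisionEncoderDecoder checkpoint such as this one; the image processor and tokenizer pairings are assumptions, since the card does not say which preprocessors were saved with the model.

```python
# Hypothetical usage sketch for gagan3012/ViTGPT2_vizwiz (not taken from the card).
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

model = VisionEncoderDecoderModel.from_pretrained("gagan3012/ViTGPT2_vizwiz")
# Assumed preprocessing: a standard ViT image processor and the GPT-2 tokenizer.
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

image = Image.open("example.jpg").convert("RGB")   # any local image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
caption_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
```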
GleamEyeBeast/Mandarin
GleamEyeBeast
2022-02-07T04:25:26Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: Mandarin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mandarin This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
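As a usage note for the card above (which lists only training settings), here is a hedged transcription sketch, assuming the repository ships a Wav2Vec2 processor and that the input audio is 16 kHz mono; the file name is illustrative.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Assumes GleamEyeBeast/Mandarin includes processor files alongside the model weights.
processor = Wav2Vec2Processor.from_pretrained("GleamEyeBeast/Mandarin")
model = Wav2Vec2ForCTC.from_pretrained("GleamEyeBeast/Mandarin")

speech, _ = librosa.load("sample.wav", sr=16_000)   # resample to the expected 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)        # greedy CTC decoding
print(processor.batch_decode(predicted_ids)[0])
```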
JIWON/bert-base-finetuned-nli
JIWON
2022-02-07T00:29:00Z
11
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:klue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - klue metrics: - accuracy model-index: - name: bert-base-finetuned-nli results: - task: name: Text Classification type: text-classification dataset: name: klue type: klue args: nli metrics: - name: Accuracy type: accuracy value: 0.085 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-nli This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.6210 - Accuracy: 0.085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 196 | 0.6210 | 0.085 | | No log | 2.0 | 392 | 0.5421 | 0.0643 | | 0.5048 | 3.0 | 588 | 0.5523 | 0.062 | | 0.5048 | 4.0 | 784 | 0.5769 | 0.0533 | | 0.5048 | 5.0 | 980 | 0.5959 | 0.052 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
BigSalmon/Points
BigSalmon
2022-02-07T00:27:49Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Converting Points to Paragraphs Example Prompts: ``` ### - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership. ### - with 2,000,000 individual articles on everything - wikipedia is the #8 site on the world wide web - created by anyone with access to a computer - growing at fast rate - proof that collaborative community-based projects are the future Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future. ### - ```
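The card above gives the prompt format but no loading code; a minimal generation sketch follows (the sampling settings are illustrative, not from the card).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="BigSalmon/Points")

# Reuse the documented points-to-paragraph format: "###", bullet points, then "Text:".
prompt = (
    "###\n"
    "- declining viewership facing the nba.\n"
    "- does not have to be this way.\n"
    "- in fact, many solutions exist.\n"
    "- the four point line would surely draw in eyes.\n"
    "Text:"
)
out = generator(prompt, max_new_tokens=120, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```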
StevenLimcorn/wav2vec2-xls-r-300m-zh-TW
StevenLimcorn
2022-02-06T21:57:14Z
26
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - zh-TW license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-TW dataset. It achieves the following results on the evaluation set: - Loss: 1.1786 - Wer: 0.8594 - Cer: 0.2964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 64.6189 | 2.51 | 500 | 63.8077 | 1.0 | 1.0 | | 8.0561 | 5.03 | 1000 | 6.8014 | 1.0 | 1.0 | | 6.0427 | 7.54 | 1500 | 6.0745 | 1.0 | 1.0 | | 5.9357 | 10.05 | 2000 | 5.8682 | 1.0 | 1.0 | | 5.0489 | 12.56 | 2500 | 4.4032 | 0.9990 | 0.7750 | | 4.6184 | 15.08 | 3000 | 3.8383 | 0.9983 | 0.6768 | | 4.365 | 17.59 | 3500 | 3.4633 | 0.9959 | 0.6299 | | 4.1026 | 20.1 | 4000 | 3.0732 | 0.9902 | 0.5814 | | 3.8655 | 22.61 | 4500 | 2.7638 | 0.9868 | 0.5465 | | 3.6991 | 25.13 | 5000 | 2.4759 | 0.9811 | 0.5088 | | 3.4894 | 27.64 | 5500 | 2.2937 | 0.9746 | 0.4852 | | 3.3983 | 30.15 | 6000 | 2.1684 | 0.9733 | 0.4674 | | 3.2736 | 32.66 | 6500 | 2.0372 | 0.9659 | 0.4458 | | 3.1884 | 35.18 | 7000 | 1.9267 | 0.9648 | 0.4329 | | 3.1248 | 37.69 | 7500 | 1.8408 | 0.9591 | 0.4217 | | 3.0381 | 40.2 | 8000 | 1.7531 | 0.9503 | 0.4074 | | 2.9515 | 42.71 | 8500 | 1.6880 | 0.9459 | 0.3967 | | 2.8704 | 45.23 | 9000 | 1.6264 | 0.9378 | 0.3884 | | 2.8128 | 47.74 | 9500 | 1.5621 | 0.9341 | 0.3782 | | 2.7386 | 50.25 | 10000 | 1.5011 | 0.9243 | 0.3664 | | 2.6646 | 52.76 | 10500 | 1.4608 | 0.9192 | 0.3575 | | 2.6072 | 55.28 | 11000 | 1.4251 | 0.9148 | 0.3501 | | 2.569 | 57.79 | 11500 | 1.3837 | 0.9060 | 0.3462 | | 2.5091 | 60.3 | 12000 | 1.3589 | 0.9070 | 0.3392 | | 2.4588 | 62.81 | 12500 | 1.3261 | 0.8966 | 0.3284 | | 2.4083 | 65.33 | 13000 | 1.3052 | 0.8982 | 0.3265 | | 2.3787 | 67.84 | 13500 | 1.2997 | 0.8908 | 0.3243 | | 2.3457 | 70.35 | 14000 | 1.2778 | 0.8898 | 0.3187 | | 2.3099 | 72.86 | 14500 | 1.2661 | 0.8830 | 0.3172 | | 2.2559 | 75.38 | 15000 | 1.2475 | 0.8851 | 0.3143 | | 2.2264 | 77.89 | 15500 | 1.2319 | 0.8739 | 0.3085 | | 2.196 | 80.4 | 16000 | 1.2218 | 0.8722 | 0.3049 | | 2.1613 | 82.91 | 16500 | 1.2093 | 0.8719 | 0.3051 | | 2.1455 | 85.43 | 17000 | 1.2055 | 0.8624 | 0.3005 | | 2.1193 | 87.94 | 17500 | 1.1975 | 0.8600 | 0.2982 | | 2.0911 | 90.45 | 18000 | 1.1960 | 0.8648 | 0.3003 | | 2.0884 | 92.96 | 18500 | 1.1871 | 0.8638 | 0.2971 | | 2.0766 | 95.48 | 19000 | 1.1814 | 0.8617 | 0.2967 | | 2.0735 | 97.99 | 19500 | 1.1801 | 0.8621 | 0.2969 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
preetham18/xls-r-hi-300m-8
preetham18
2022-02-06T20:40:28Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.5258 - Wer: 1.0073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.917 | 16.13 | 500 | 4.8963 | 1.0 | | 3.3585 | 32.25 | 1000 | 3.3069 | 1.0000 | | 1.5873 | 48.38 | 1500 | 0.8274 | 1.0061 | | 1.2654 | 64.51 | 2000 | 0.6250 | 1.0076 | | 1.0917 | 80.64 | 2500 | 0.5460 | 1.0056 | | 1.0001 | 96.76 | 3000 | 0.5304 | 1.0083 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
dark-knight/wav2vec2-base-timit-demo-colab
dark-knight
2022-02-06T16:25:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm
anuragshas
2022-02-06T16:11:16Z
29
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "mr", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - mr license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset. It achieves the following results on the evaluation set: - Loss: 0.6693 - Wer: 0.5921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 500.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 4.9504 | 18.18 | 400 | 4.6730 | 1.0 | | 3.3766 | 36.36 | 800 | 3.3464 | 1.0 | | 3.1128 | 54.55 | 1200 | 3.0177 | 0.9980 | | 1.7966 | 72.73 | 1600 | 0.8733 | 0.8039 | | 1.4085 | 90.91 | 2000 | 0.5555 | 0.6458 | | 1.1731 | 109.09 | 2400 | 0.4930 | 0.6438 | | 1.0271 | 127.27 | 2800 | 0.4780 | 0.6093 | | 0.9045 | 145.45 | 3200 | 0.4647 | 0.6578 | | 0.807 | 163.64 | 3600 | 0.4505 | 0.5925 | | 0.741 | 181.82 | 4000 | 0.4746 | 0.6025 | | 0.6706 | 200.0 | 4400 | 0.5004 | 0.5844 | | 0.6186 | 218.18 | 4800 | 0.4984 | 0.5997 | | 0.5508 | 236.36 | 5200 | 0.5298 | 0.5636 | | 0.5123 | 254.55 | 5600 | 0.5410 | 0.5110 | | 0.4623 | 272.73 | 6000 | 0.5591 | 0.5383 | | 0.4281 | 290.91 | 6400 | 0.5775 | 0.5600 | | 0.4045 | 309.09 | 6800 | 0.5924 | 0.5580 | | 0.3651 | 327.27 | 7200 | 0.5671 | 0.5684 | | 0.343 | 345.45 | 7600 | 0.6083 | 0.5945 | | 0.3085 | 363.64 | 8000 | 0.6243 | 0.5728 | | 0.2941 | 381.82 | 8400 | 0.6245 | 0.5580 | | 0.2735 | 400.0 | 8800 | 0.6458 | 0.5804 | | 0.262 | 418.18 | 9200 | 0.6566 | 0.5824 | | 0.2578 | 436.36 | 9600 | 0.6558 | 0.5965 | | 0.2388 | 454.55 | 10000 | 0.6598 | 0.5993 | | 0.2328 | 472.73 | 10400 | 0.6700 | 0.6041 | | 0.2286 | 490.91 | 10800 | 0.6684 | 0.5957 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
joykirat/bert-base-uncased-finetuned-swag
joykirat
2022-02-06T11:11:04Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Tokenizers 0.11.0
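Since the card above leaves usage blank, here is a hedged sketch of SWAG-style multiple-choice scoring with this checkpoint; the context and candidate endings are invented for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("joykirat/bert-base-uncased-finetuned-swag")
model = AutoModelForMultipleChoice.from_pretrained("joykirat/bert-base-uncased-finetuned-swag")

context = "She picked up the guitar and"
endings = ["began to play a quiet song.", "poured it into a tall glass.", "parked it in the garage."]

# Encode each (context, ending) pair, then add the num_choices dimension the model expects.
encoded = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, num_choices)
print("most plausible ending:", endings[logits.argmax(dim=-1).item()])
```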
Jeevesh8/feather_berts
Jeevesh8
2022-02-06T04:53:08Z
0
0
null
[ "arxiv:1911.02969", "region:us" ]
null
2022-03-02T23:29:04Z
First 50 [Feather BERT-s](https://arxiv.org/abs/1911.02969) compressed in groups of 10. Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ``.from_pretrained()``. For downloading next 50 Feather BERT-s, see [here](https://huggingface.co/Jeevesh8/feather_berts1/).
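A sketch of the loading step the card describes, after cloning the repo and decompressing one of the groups; the local directory name is hypothetical, and the sequence-classification head and uncased tokenizer are assumptions based on the linked paper's NLI fine-tuning setup.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical path to one decompressed Feather BERT checkpoint inside the cloned repo.
checkpoint_dir = "./feather_berts/feather_bert_0"

model = AutoModelForSequenceClassification.from_pretrained(checkpoint_dir)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed base tokenizer
```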
Jeevesh8/feather_berts1
Jeevesh8
2022-02-06T04:52:40Z
0
0
null
[ "arxiv:1911.02969", "region:us" ]
null
2022-03-02T23:29:04Z
Second 50 [Feather BERT-s](https://arxiv.org/abs/1911.02969) compressed in groups of 10. Clone this repository, decompress the compressed folders, and provide the paths to the Feather BERT you want to use in ``.from_pretrained()``. For downloading first 50 Feather BERT-s, see [here](https://huggingface.co/Jeevesh8/feather_berts/).
am-shb/bert-base-multilingual-uncased-finetuned
am-shb
2022-02-06T00:05:59Z
5
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: '57463134' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 57463134 This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 16 - seed: 1337 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.11.2 - Pytorch 1.10.0 - Datasets 1.8.0 - Tokenizers 0.10.3
sunitha/Trial_3_Results
sunitha
2022-02-05T19:27:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: Trial_3_Results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Trial_3_Results This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
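A question-answering pipeline sketch for the SQuAD-tuned checkpoint above; the question and context are made up for illustration.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="sunitha/Trial_3_Results")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="Trial_3_Results is a bert-base-cased model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```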
transformersbook/xlm-roberta-base-finetuned-panx-de
transformersbook
2022-02-05T17:07:41Z
9
2
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8645910410381922 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb). It achieves the following results on the evaluation set: - Loss: 0.1388 - F1: 0.8646 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2652 | 1.0 | 525 | 0.1602 | 0.8230 | | 0.1314 | 2.0 | 1050 | 0.1372 | 0.8527 | | 0.0806 | 3.0 | 1575 | 0.1388 | 0.8646 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
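A tagging sketch for the PAN-X.de checkpoint above; the German sentence is illustrative, and `aggregation_strategy="simple"` merges word-piece predictions into entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
for entity in ner("Jeff Dean arbeitet bei Google in Kalifornien."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```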
transformersbook/xlm-roberta-base-finetuned-panx-en
transformersbook
2022-02-05T17:07:09Z
17
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.69816564758199 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb). It achieves the following results on the evaluation set: - Loss: 0.3676 - F1: 0.6982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.026 | 1.0 | 50 | 0.5734 | 0.4901 | | 0.4913 | 2.0 | 100 | 0.3870 | 0.6696 | | 0.3734 | 3.0 | 150 | 0.3676 | 0.6982 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
transformersbook/pegasus-samsum
transformersbook
2022-02-05T17:05:28Z
75124
6
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum-test This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. The model is trained in Chapter 6: Summarization in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/06_summarization.ipynb). It achieves the following results on the evaluation set: - Loss: 1.4875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7012 | 0.54 | 500 | 1.4875 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
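A dialogue-summarization sketch for the checkpoint above; the chat snippet is invented and the length limit is illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="transformersbook/pegasus-samsum")

dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes! 12:30 at the usual place?
Anna: Perfect, I'll book a table and see you there."""

print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```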
transformersbook/codeparrot-small-vocabulary
transformersbook
2022-02-05T17:00:28Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# CodeParrot This is a small version of the CodeParrot tokenizer trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The tokenizer is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
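Since this repository ships only a tokenizer, here is a quick sketch of loading it and inspecting how it splits Python source; the snippet being tokenized is arbitrary.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("transformersbook/codeparrot-small-vocabulary")
code = "def add(a, b):\n    return a + b"
print(tokenizer.tokenize(code))   # inspect the learned BPE pieces
print(len(tokenizer))             # vocabulary size
```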
transformersbook/distilbert-base-uncased-distilled-clinc
transformersbook
2022-02-05T16:47:39Z
199
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9393548387096774 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned with knowledge distillation version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb). It achieves the following results on the evaluation set: - Loss: 0.1005 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9031 | 1.0 | 318 | 0.5745 | 0.7365 | | 0.4481 | 2.0 | 636 | 0.2856 | 0.8748 | | 0.2528 | 3.0 | 954 | 0.1798 | 0.9187 | | 0.176 | 4.0 | 1272 | 0.1398 | 0.9294 | | 0.1416 | 5.0 | 1590 | 0.1211 | 0.9348 | | 0.1243 | 6.0 | 1908 | 0.1116 | 0.9348 | | 0.1133 | 7.0 | 2226 | 0.1062 | 0.9377 | | 0.1075 | 8.0 | 2544 | 0.1035 | 0.9387 | | 0.1039 | 9.0 | 2862 | 0.1014 | 0.9381 | | 0.1018 | 10.0 | 3180 | 0.1005 | 0.9394 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.0 - Tokenizers 0.10.3
transformersbook/distilbert-base-uncased-finetuned-clinc
transformersbook
2022-02-05T16:46:21Z
100
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9174193548387096 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb). It achieves the following results on the evaluation set: - Loss: 0.7773 - Accuracy: 0.9174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2923 | 1.0 | 318 | 3.2893 | 0.7423 | | 2.6307 | 2.0 | 636 | 1.8837 | 0.8281 | | 1.5483 | 3.0 | 954 | 1.1583 | 0.8968 | | 1.0153 | 4.0 | 1272 | 0.8618 | 0.9094 | | 0.7958 | 5.0 | 1590 | 0.7773 | 0.9174 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.0 - Tokenizers 0.10.3
transformersbook/bert-base-uncased-finetuned-clinc
transformersbook
2022-02-05T16:38:54Z
922
3
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1909.02027", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Intent Detection with BERT This model was trained on the [CLINC150](https://arxiv.org/abs/1909.02027) dataset for customer intent detection. The dataset can be found on the [Hub](https://huggingface.co/datasets/clinc_oos). The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
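A minimal intent-detection sketch for the CLINC150 checkpoint above; the query is illustrative.

```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification", model="transformersbook/bert-base-uncased-finetuned-clinc"
)
print(intent_classifier("Hey, I need to rent a car in Paris from the 1st to the 15th of November."))
```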
transformersbook/codeparrot-small
transformersbook
2022-02-05T16:28:36Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# CodeParrot CodeParrot (small) is a 110M parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
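A code-completion sketch with the small checkpoint; the prompt and sampling settings are illustrative.

```python
from transformers import pipeline

generation = pipeline("text-generation", model="transformersbook/codeparrot-small")
prompt = "def area_of_circle(radius):"
completion = generation(prompt, max_new_tokens=32, do_sample=True, temperature=0.4)
print(completion[0]["generated_text"])
```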
transformersbook/codeparrot
transformersbook
2022-02-05T16:27:42Z
18
5
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# CodeParrot CodeParrot (large) is a 1.5B parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
HarrisDePerceptron/xls-r-300m-ur-cv7
HarrisDePerceptron
2022-02-05T11:21:29Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ur", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - ur license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 1.2924 - Wer: 0.7201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 11.2783 | 4.17 | 100 | 4.6409 | 1.0 | | 3.5578 | 8.33 | 200 | 3.1649 | 1.0 | | 3.1279 | 12.5 | 300 | 3.0335 | 1.0 | | 2.9944 | 16.67 | 400 | 2.9526 | 0.9983 | | 2.9275 | 20.83 | 500 | 2.9291 | 1.0009 | | 2.8077 | 25.0 | 600 | 2.5633 | 0.9895 | | 2.4438 | 29.17 | 700 | 1.9045 | 0.9564 | | 1.9659 | 33.33 | 800 | 1.4114 | 0.7960 | | 1.7092 | 37.5 | 900 | 1.2584 | 0.7637 | | 1.517 | 41.67 | 1000 | 1.2040 | 0.7507 | | 1.3966 | 45.83 | 1100 | 1.1273 | 0.7463 | | 1.3197 | 50.0 | 1200 | 1.1054 | 0.6957 | | 1.2476 | 54.17 | 1300 | 1.1035 | 0.7001 | | 1.1796 | 58.33 | 1400 | 1.0890 | 0.7097 | | 1.1237 | 62.5 | 1500 | 1.0883 | 0.7167 | | 1.0777 | 66.67 | 1600 | 1.1067 | 0.7219 | | 1.0051 | 70.83 | 1700 | 1.1115 | 0.7236 | | 0.9521 | 75.0 | 1800 | 1.0867 | 0.7132 | | 0.9147 | 79.17 | 1900 | 1.0852 | 0.7210 | | 0.8798 | 83.33 | 2000 | 1.1411 | 0.7097 | | 0.8317 | 87.5 | 2100 | 1.1634 | 0.7018 | | 0.7946 | 91.67 | 2200 | 1.1621 | 0.7201 | | 0.7594 | 95.83 | 2300 | 1.1482 | 0.7036 | | 0.729 | 100.0 | 2400 | 1.1493 | 0.7062 | | 0.7055 | 104.17 | 2500 | 1.1726 | 0.6931 | | 0.6622 | 108.33 | 2600 | 1.1938 | 0.7001 | | 0.6583 | 112.5 | 2700 | 1.1832 | 0.7149 | | 0.6299 | 116.67 | 2800 | 1.1996 | 0.7175 | | 0.5903 | 120.83 | 2900 | 1.1986 | 0.7132 | | 0.5816 | 125.0 | 3000 | 1.1909 | 0.7010 | | 0.5583 | 129.17 | 3100 | 1.2079 | 0.6870 | | 0.5392 | 133.33 | 3200 | 1.2109 | 0.7228 | | 0.5412 | 137.5 | 3300 | 1.2353 | 0.7245 | | 0.5136 | 141.67 | 3400 | 1.2390 | 0.7254 | | 0.5007 | 145.83 | 3500 | 1.2273 | 0.7123 | | 0.4883 | 150.0 | 3600 | 1.2773 | 0.7289 | | 0.4835 | 154.17 | 3700 | 1.2678 | 0.7289 | | 0.4568 | 158.33 | 3800 | 1.2592 | 0.7350 | | 0.4525 | 162.5 | 3900 | 1.2705 | 0.7254 | | 0.4379 | 166.67 | 4000 | 1.2717 | 0.7306 | | 0.4198 | 170.83 | 4100 | 1.2618 | 0.7219 | | 0.4216 | 175.0 | 4200 | 1.2909 | 0.7158 | | 0.4305 | 179.17 | 4300 | 1.2808 | 0.7167 | | 0.399 | 183.33 | 4400 | 1.2750 | 0.7193 | | 0.3937 | 187.5 | 4500 | 1.2719 | 0.7149 | | 0.3905 | 191.67 | 4600 | 1.2816 | 0.7158 | | 0.3892 | 195.83 | 4700 | 1.2951 | 0.7210 | | 0.3932 | 200.0 | 4800 | 1.2924 | 0.7201 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
ajitrajasekharan/biomedical
ajitrajasekharan
2022-02-05T08:44:05Z
6
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - en license: mit widget: - text: "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]" example_title: "Test for entity type: Disease" - text: "Overexpression of [MASK] occurs across a wide range of cancers" example_title: "Test for entity type: Gene" - text: "Patients treated with [MASK] are vulnerable to infectious diseases" example_title: "Test for entity type: Drug" - text: "A eGFR level below [MASK] indicates chronic kidney disease" example_title: "Test for entity type: Measure " - text: "In the [MASK], increased daily imatinib dose induced MMR" example_title: "Test for entity type: STUDY/TRIAL" - text: "Paul Erdos died at [MASK]" example_title: "Test for entity type: TIME" inference: parameters: top_k: 10 tags: - fill-mask - exbert --- This **cased model** was pretrained from scratch using a custom vocabulary on the following corpora - Pubmed - Clinical trials corpus - and a small subset of Bookcorpus The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20) [App in Spaces](https://huggingface.co/spaces/ajitrajasekharan/self-supervised-ner-biomedical) demonstrates this approach. [Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased. The ensemble detects 69 entity subtypes (17 broad entity groups) <img src="https://ajitrajasekharan.github.io/images/1.png" width="600"> ### Ensemble model performance <img src="https://ajitrajasekharan.github.io/images/6.png" width="600"> ### Additional notes - The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER. - Some of the top model predictions like "a", "the", punctuations, etc. while valid predictions, bear no entity information. These are filtered when harvesting descriptors for NER. The examples on the right are unfiltered results. - [Use this link](https://huggingface.co/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation) to examine both fill-mask prediction and [CLS] predictions ### License MIT license <a href="https://huggingface.co/exbert/?model=ajitrajasekharan/biomedical&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
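A fill-mask sketch mirroring the widget examples in the card above; `top_k=10` matches the card's inference settings.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ajitrajasekharan/biomedical")
predictions = fill_mask("Overexpression of [MASK] occurs across a wide range of cancers", top_k=10)
for pred in predictions:
    print(f"{pred['token_str']:>15}  {pred['score']:.3f}")
```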
jinlmsft/t5-large-multiwoz
jinlmsft
2022-02-04T23:08:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-large-multiwoz results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-large-multiwoz This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0064 - Acc: 1.0 - True Num: 56671 - Num: 56776 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Acc | True Num | Num | |:-------------:|:-----:|:----:|:---------------:|:----:|:--------:|:-----:| | 0.1261 | 1.13 | 1000 | 0.0933 | 0.98 | 55574 | 56776 | | 0.0951 | 2.25 | 2000 | 0.0655 | 0.98 | 55867 | 56776 | | 0.0774 | 3.38 | 3000 | 0.0480 | 0.99 | 56047 | 56776 | | 0.0584 | 4.51 | 4000 | 0.0334 | 0.99 | 56252 | 56776 | | 0.042 | 5.64 | 5000 | 0.0222 | 0.99 | 56411 | 56776 | | 0.0329 | 6.76 | 6000 | 0.0139 | 1.0 | 56502 | 56776 | | 0.0254 | 7.89 | 7000 | 0.0094 | 1.0 | 56626 | 56776 | | 0.0214 | 9.02 | 8000 | 0.0070 | 1.0 | 56659 | 56776 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
huggingartists/andre-3000
huggingartists
2022-02-04T22:00:23Z
7
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/andre-3000", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/andre-3000 tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/64b15c9489c65f5bf8f6577334347404.434x434x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">André 3000</div> <a href="https://genius.com/artists/andre-3000"> <div style="text-align: center; font-size: 14px;">@andre-3000</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from André 3000. The dataset is available [here](https://huggingface.co/datasets/huggingartists/andre-3000) and can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/andre-3000") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2hnhboqf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on André 3000's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1mydp6nh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1mydp6nh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/andre-3000') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/andre-3000") model = AutoModelWithLMHead.from_pretrained("huggingartists/andre-3000") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
bluebalam/paper-rec
bluebalam
2022-02-04T21:37:35Z
0
3
null
[ "recsys", "pytorch", "sentence_transformers", "en", "arxiv:2109.03955", "arxiv:1908.10084", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en license: mit tags: - recsys - pytorch - sentence_transformers #datasets: #- {dataset_0} # Example: common_voice. Use dataset id from https://hf.co/datasets #metrics: #- {metric_0} # Example: wer. Use metric id from https://hf.co/metrics --- # `paper-rec` Model Card Last updated: 2022-02-04 ## Model Details `paper-rec` goal is to recommend users what scientific papers to read next based on their preferences. This is a test model used to explore Hugging Face Hub capabilities and identify requirements to enable support for recommendation task in the ecosystem. ### Model date 2022-02-04 ### Model type Recommender System model with support of a Language Model for feature extraction. ### Paper & samples The overall idea for `paper-rec` test model is inspired by this work: [NU:BRIEF – A Privacy-aware Newsletter Personalization Engine for Publishers](https://arxiv.org/abs/2109.03955). However, for `paper-rec`, we use a different language model more suitable for longer text, namely *Sentence Transformers*: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084), in particular: [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). ## Model Use The intended direct users are recommender systems' practitioners and enthusiasts that would like to experiment with the task of scientific paper recommendation. ## Data, Performance, and Limitations ### Data The data used for this model corresponds to the [RSS news feeds for arXiv updates](https://arxiv.org/help/rss) accessed on 2022-02-04. In particular to the ones related to Machine Learning and AI: 1. [Artificial Intelligence](http://arxiv.org/rss/cs.AI) 1. [Computation and Language](http://arxiv.org/rss/cs.CL) 1. [Computer Vision and Pattern Recognition](http://arxiv.org/rss/cs.CV) 1. [Information Retrieval](http://arxiv.org/rss/cs.IR) 1. [Machine Learning (cs)](http://arxiv.org/rss/cs.LG) 1. [Machine Learning (stat)](http://arxiv.org/rss/stat.ML) ### Performance N/A ## Limitations The model is limited to the papers fetched on 2022-02-04, that is, those papers are the only ones it can recommend.
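A minimal content-based sketch of the approach the card describes: embed papers with sentence-transformers/all-MiniLM-L6-v2 and rank candidates by cosine similarity to a user profile. The titles and the profile below are invented, and this is not the packaged `paper-rec` model itself.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

liked_papers = ["Sentence embeddings with siamese BERT networks"]  # user's reading history (invented)
candidates = [
    "A survey of graph neural networks for recommendation",
    "Prompt tuning for low-resource text classification",
    "Contrastive learning of sentence representations",
]

# Build a simple user profile as the mean embedding of liked papers, then rank candidates.
profile = encoder.encode(liked_papers, convert_to_tensor=True).mean(dim=0, keepdim=True)
candidate_embeddings = encoder.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(profile, candidate_embeddings)[0]

for idx in scores.argsort(descending=True):
    print(f"{scores[idx].item():.3f}  {candidates[int(idx)]}")
```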
hyperion-ml/voxceleb-v1.1-fbank80_stmn_lresnet34_e256_arcs30m0.3_do0_adam_lr0.05_b512.v1
hyperion-ml
2022-02-04T21:20:32Z
5
1
null
[ "hyperion", "audio", "speech", "speaker-recognition", "x-vector", "thin-resnet34", "en", "dataset:voxceleb", "license:apache-2.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - hyperion - audio - speech - speaker-recognition - x-vector - thin-resnet34 datasets: - voxceleb metrics: - eer - min_dcf-p=0.05 - min_dcf-p=0.01 model-index: - name: voxceleb-v1.1-fbank80_stmn_lresnet34_e256_arcs30m0.3_do0_adam_lr0.05_b512.v1 results: - task: type: speaker-verification name: Speaker Verification dataset: type: voxceleb1 name: Voxceleb1 args: Train on VoxCeleb2-dev metrics: - type: eer value: 2.11 name: EER Vox1-O - type: min_dcf-p=0.05 value: 0.135 name: Minimum DCF Vox1-O prior=0.05 - type: act_dcf-p=0.01 value: 0.208 name: Minimum DCF Vox1-O prior=0.01 - type: eer value: 1.93 name: EER Vox1-E - type: min_dcf-p=0.05 value: 0.121 name: Minimum DCF Vox1-E prior=0.05 - type: act_dcf-p=0.01 value: 0.204 name: Minimum DCF Vox1-E Original prior=0.01 - type: eer value: 3.21 name: EER Vox1-H - type: min_dcf-p=0.05 value: 0.190 name: Minimum DCF Vox1-H prior=0.05 - type: act_dcf-p=0.01 value: 0.298 name: Minimum DCF Vox1-H Original prior=0.01 --- # Hyperion Toolkit Speaker Verification pre-trained Model ## Model Configuration This model was trained using recipe [voxceleb/v1.1](https://github.com/hyperion-ml/hyperion/tree/master/egs/voxceleb/v1.1) The configuration for this modeis is defined in [config_fbank80_stmn_lresnet34_arcs30m0.3_adam_lr0.05_amp.v1.sh](https://github.com/hyperion-ml/hyperion/blob/master/egs/voxceleb/v1.1/global_conf/config_fbank80_stmn_lresnet34_arcs30m0.3_adam_lr0.05_amp.v1.sh) This is an x-vector model with: - 80 logMel filter-banks with short-time mean normalization. - ThinResNet34 (aka Light ResNet34) encoder. - Mean+Stddev pooling - AAM-softmax loss (m=0.3, s=30) - Mixed prec. training.
BigSalmon/InformalToFormalLincoln20
BigSalmon
2022-02-04T20:56:17Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Informal to Formal: Wordy to Concise: Fill Missing Phrase: ``` from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln20") model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln20") ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time) ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time) ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ```` ``` infill: increasing the number of sidewalks in suburban areas will [MASK]. Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ). infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago. infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly. Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly. infill: ``` ``` *** wordy: chancing upon a linux user is a rare occurrence in the present day. Translate into Concise Text: present-day linux users are rare. *** wordy: an interest in classical music is becoming more and more less popular. Translate into Concise Text: classical music appreciation is dwindling. Translate into Concise Text: waning interest in classic music persists. Translate into Concise Text: interest in classic music is fading. *** wordy: the ice cream was only one dollar, but it was not a good value for the size. Translate into Concise Text: the one dollar ice cream was overpriced for its size. Translate into Concise Text: overpriced, the one dollar ice cream was small. *** wordy: ```
tesemnikov-av/NER-RUBERT-Per-Loc-Org
tesemnikov-av
2022-02-04T19:40:56Z
7
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- widget: - text: "В город Сергиев Посад приехал Курт Кобейн." --- This model fine-tunes [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on Wikipedia sentences automatically annotated with PER, LOC and ORG tags from [corus/WiNER](https://pypi.org/project/corus/#reference). Language: Russian (RU). NER classes: PER, LOC, ORG. License: MIT.
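A minimal way to try the model is the standard `transformers` token-classification pipeline (not shown on the original card); the `aggregation_strategy` argument is assumed to be available in your `transformers` version.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tesemnikov-av/NER-RUBERT-Per-Loc-Org",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
# Widget sentence from the card; expected groups: LOC for "Сергиев Посад", PER for "Курт Кобейн".
print(ner("В город Сергиев Посад приехал Курт Кобейн."))
```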
LenaSchmidt/distilbert-base-uncased-finetuned-squad
LenaSchmidt
2022-02-04T19:20:11Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0325 | 1.0 | 585 | 1.7520 | | 1.609 | 2.0 | 1170 | 1.7713 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
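Since the auto-generated card stops at training details, here is a minimal, hedged inference sketch using the question-answering pipeline; the question and context are made up for illustration.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="LenaSchmidt/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="How many epochs was the model trained for?",
    context="The model was fine-tuned with the Hugging Face Trainer for 2 epochs at a learning rate of 2e-05.",
)
print(result)  # dict with 'score', 'start', 'end', 'answer'
```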
cahya/wav2vec2-base-turkish-cv8
cahya
2022-02-04T14:30:19Z
5
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "tr", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - tr tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [./checkpoint-1000](https://huggingface.co/./checkpoint-1000) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.3282 - Wer: 0.2836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 96 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0671 | 2.04 | 200 | 0.3079 | 0.2752 | | 0.6433 | 4.08 | 400 | 0.2728 | 0.2848 | | 0.5687 | 6.12 | 600 | 0.2882 | 0.3036 | | 0.5355 | 8.16 | 800 | 0.2778 | 0.2920 | | 0.5116 | 10.2 | 1000 | 0.2906 | 0.3014 | | 0.5313 | 9.16 | 1200 | 0.2984 | 0.3273 | | 0.4996 | 10.69 | 1400 | 0.3170 | 0.3344 | | 0.4845 | 12.21 | 1600 | 0.3202 | 0.3634 | | 0.5092 | 13.74 | 1800 | 0.3167 | 0.3373 | | 0.4777 | 15.27 | 2000 | 0.3292 | 0.3386 | | 0.4651 | 16.79 | 2200 | 0.3070 | 0.3427 | | 0.461 | 18.32 | 2400 | 0.3149 | 0.3561 | | 0.4481 | 19.85 | 2600 | 0.3292 | 0.3441 | | 0.4479 | 21.37 | 2800 | 0.3142 | 0.3209 | | 0.4305 | 22.9 | 3000 | 0.3525 | 0.3547 | | 0.4254 | 24.43 | 3200 | 0.3414 | 0.3400 | | 0.4066 | 25.95 | 3400 | 0.3118 | 0.3207 | | 0.4043 | 27.48 | 3600 | 0.3418 | 0.3483 | | 0.3985 | 29.01 | 3800 | 0.3254 | 0.3166 | | 0.3982 | 30.53 | 4000 | 0.3306 | 0.3453 | | 0.3929 | 32.06 | 4200 | 0.3262 | 0.3229 | | 0.378 | 33.59 | 4400 | 0.3546 | 0.3336 | | 0.4062 | 35.11 | 4600 | 0.3174 | 0.3457 | | 0.3648 | 36.64 | 4800 | 0.3377 | 0.3357 | | 0.3609 | 38.17 | 5000 | 0.3346 | 0.3520 | | 0.3483 | 39.69 | 5200 | 0.3350 | 0.3526 | | 0.3548 | 41.22 | 5400 | 0.3330 | 0.3406 | | 0.3446 | 42.75 | 5600 | 0.3398 | 0.3372 | | 0.3346 | 44.27 | 5800 | 0.3449 | 0.3288 | | 0.3309 | 45.8 | 6000 | 0.3320 | 0.3144 | | 0.326 | 47.33 | 6200 | 0.3400 | 0.3279 | | 0.3189 | 48.85 | 6400 | 0.3400 | 0.3150 | | 0.3165 | 50.38 | 6600 | 0.3359 | 0.2995 | | 0.3132 | 51.91 | 6800 | 0.3343 | 0.3096 | | 0.3092 | 53.44 | 7000 | 0.3224 | 0.3029 | | 0.2995 | 54.96 | 7200 | 0.3205 | 0.2985 | | 0.304 | 56.49 | 7400 | 0.3523 | 0.3034 | | 0.2952 | 58.02 | 7600 | 0.3289 | 0.2934 | | 0.2875 | 59.54 | 7800 | 0.3350 | 0.3008 | | 0.2868 | 61.07 | 8000 | 0.3537 | 0.3227 | | 0.2875 | 62.6 | 8200 | 0.3389 | 0.2970 | | 0.2778 | 64.12 | 8400 | 0.3370 | 0.2960 | | 0.2706 | 65.65 | 8600 | 0.3250 | 0.2802 | | 0.2669 | 67.18 | 8800 | 0.3351 | 0.2903 | | 0.2615 | 68.7 | 9000 | 0.3382 | 0.2989 | | 0.2563 | 70.23 | 9200 | 0.3312 | 0.2975 | | 0.2546 | 71.76 | 9400 | 0.3212 | 0.3003 | | 0.2482 | 73.28 | 9600 | 0.3337 | 0.3091 | | 0.2504 | 74.81 | 9800 | 0.3308 | 0.3110 | | 0.2456 | 76.34 | 10000 | 0.3157 | 0.3118 | | 0.2363 | 77.86 | 10200 | 
0.3251 | 0.3144 | | 0.2319 | 79.39 | 10400 | 0.3253 | 0.3038 | | 0.2266 | 80.92 | 10600 | 0.3374 | 0.3038 | | 0.2279 | 82.44 | 10800 | 0.3268 | 0.2964 | | 0.2231 | 83.97 | 11000 | 0.3278 | 0.2950 | | 0.2185 | 85.5 | 11200 | 0.3462 | 0.2981 | | 0.2245 | 87.02 | 11400 | 0.3311 | 0.2895 | | 0.223 | 88.55 | 11600 | 0.3325 | 0.2877 | | 0.2121 | 90.08 | 11800 | 0.3337 | 0.2828 | | 0.2126 | 91.6 | 12000 | 0.3325 | 0.2808 | | 0.2027 | 93.13 | 12200 | 0.3277 | 0.2820 | | 0.2058 | 94.66 | 12400 | 0.3308 | 0.2827 | | 0.1991 | 96.18 | 12600 | 0.3279 | 0.2820 | | 0.1991 | 97.71 | 12800 | 0.3300 | 0.2822 | | 0.1986 | 99.24 | 13000 | 0.3285 | 0.2835 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
Language-Media-Lab/mt5-small-ain-jpn-mt
Language-Media-Lab
2022-02-04T13:20:55Z
5
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "translation", "jpn", "ain", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - jpn - ain tags: - translation --- mt5-small-ain-jpn-mt is a machine translation model initialized from [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates the Ainu language into Japanese.
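The card provides no usage snippet; the following is a minimal sketch using the standard `transformers` seq2seq API. The Ainu example string is only a placeholder, and no claim is made about translation quality.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Language-Media-Lab/mt5-small-ain-jpn-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

ainu_text = "irankarapte"  # placeholder input; replace with a real Ainu sentence
inputs = tokenizer(ainu_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```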
Language-Media-Lab/byt5-small-jpn-ain-mt
Language-Media-Lab
2022-02-04T13:02:58Z
14
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "translation", "jpn", "ain", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - jpn - ain tags: - translation --- Byt5-small-jpn-ain-mt is a machine translation model initialized from [Google's ByT5-small](https://huggingface.co/google/byt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Japanese into the Ainu language.
Plim/xls-r-1b-fr
Plim
2022-02-04T11:45:21Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "fr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - fr license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.2464 - Wer: 0.2220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0326 | 0.32 | 1000 | 0.3092 | 0.2718 | | 1.0828 | 0.65 | 2000 | 0.2843 | 0.2606 | | 1.0771 | 0.97 | 3000 | 0.2774 | 0.2488 | | 1.0306 | 1.3 | 4000 | 0.2588 | 0.2351 | | 1.0052 | 1.62 | 5000 | 0.2483 | 0.2284 | | 0.9865 | 1.94 | 6000 | 0.2464 | 0.2220 | | 0.978 | 2.27 | 7000 | 0.2514 | 0.2172 | | 1.7438 | 2.59 | 8000 | 0.7983 | 0.5072 | | 2.3309 | 2.92 | 9000 | 1.8917 | 0.9416 | | 2.1834 | 3.24 | 10000 | 1.7496 | 0.9030 | | 2.3047 | 3.56 | 11000 | 1.5377 | 0.8747 | | 2.1378 | 3.89 | 12000 | 1.3501 | 0.7923 | | 1.9812 | 4.21 | 13000 | 1.2662 | 0.7697 | | 2.6855 | 4.54 | 14000 | 2.4120 | 0.9902 | | 2.7482 | 4.86 | 15000 | 2.5341 | 0.9874 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1
Subhashini17
2022-02-04T11:14:25Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-ta-colab-new1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ta-colab-new1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6642 - eval_wer: 0.7611 - eval_runtime: 152.4412 - eval_samples_per_second: 11.683 - eval_steps_per_second: 1.463 - epoch: 10.11 - step: 960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.13.3 - Tokenizers 0.10.3
ai-forever/bert-base-NER-reptile-5-datasets
ai-forever
2022-02-04T10:51:07Z
38
3
transformers
[ "transformers", "pytorch", "bert", "token-classification", "PyTorch", "en", "dataset:conll2003", "dataset:wnut_17", "dataset:jnlpba", "dataset:conll2012", "dataset:BTC", "dataset:dfki-nlp/few-nerd", "arxiv:2010.02405", "model-index", "autotrain_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - en inference: false pipeline_tag: false datasets: - conll2003 - wnut_17 - jnlpba - conll2012 - BTC - dfki-nlp/few-nerd tags: - PyTorch model-index: - name: "bert-base-NER-reptile-5-datasets" results: - task: name: few-shot-ner type: named-entity-recognition dataset: name: few-nerd-inter type: named-entity-recognition metrics: - name: 5 way 1~2 shot type: f1 value: 56.12 - name: 5-way 5~10-shot type: f1 value: 62.7 - name: 10-way 1~2-shot type: f1 value: 50.3 - name: 10-way 5~10-shot type: f1 value: 58.82 --- # BERT base uncased model pre-trained on 5 NER datasets Model was trained by _SberIDP_. The pretraining process and technical details are described [in this article](https://habr.com/ru/company/sberbank/blog/649609/). * Task: Named Entity Recognition * Base model: [bert-base-uncased](https://huggingface.co/bert-base-uncased) * Training Data is 5 datasets: [CoNLL-2003](https://aclanthology.org/W03-0419.pdf), [WNUT17](http://noisy-text.github.io/2017/emerging-rare-entities.html), [JNLPBA](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004), [CoNLL-2012 (OntoNotes)](https://aclanthology.org/W12-4501.pdf), [BTC](https://www.derczynski.com/papers/btc.pdf) * Testing was made in Few-Shot scenario on [Few-NERD dataset](https://github.com/thunlp/Few-NERD) using the model as a backbone for [StructShot](https://arxiv.org/abs/2010.02405) The model is pretrained for NER task using [Reptile](https://openai.com/blog/reptile/) and can be finetuned for new entities with only a small amount of samples.
yohida/yoshida_gpt
yohida
2022-02-04T10:13:45Z
4
0
transformers
[ "transformers", "gpt2", "text-generation", "ja", "japanese", "gpt", "lm", "nlp", "dataset:cc100", "dataset:wikipedia", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: ja thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png tags: - ja - japanese - gpt - text-generation - lm - nlp license: mit datasets: - cc100 - wikipedia widget: - text: "西田幾多郎は、" --- # japanese-gpt-1b ![rinna-icon](./rinna.png) This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/) # How to use the model *NOTE:* Use `T5Tokenizer` to instantiate the tokenizer. ~~~~ import torch from transformers import T5Tokenizer, AutoModelForCausalLM tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b") model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b") if torch.cuda.is_available(): model = model.to("cuda") text = "西田幾多郎は、" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_length=100, min_length=100, do_sample=True, top_k=500, top_p=0.95, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id, bad_words_ids=[[tokenizer.unk_token_id]] ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) # sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの ~~~~ # Model architecture A 24-layer, 2048-hidden-size transformer-based language model. # Training The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data. # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols. # License [The MIT license](https://opensource.org/licenses/MIT)
huggingtweets/dril-heroicvillain95
huggingtweets
2022-02-04T08:49:44Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1402535431523217411/h07KN7VS_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & casually Jesse</div> <div style="text-align: center; font-size: 14px;">@dril-heroicvillain95</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & casually Jesse. | Data | wint | casually Jesse | | --- | --- | --- | | Tweets downloaded | 3228 | 2663 | | Retweets | 475 | 133 | | Short tweets | 305 | 353 | | Tweets kept | 2448 | 2177 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3u36b2x8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-heroicvillain95's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-heroicvillain95') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2
LegolasTheElf
2022-02-04T07:53:30Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "Harveenchadha/indic-voice", "generated_from_trainer", "hi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 language: - hi tags: - automatic-speech-recognition - Harveenchadha/indic-voice - generated_from_trainer model-index: - name: Wav2Vec2_xls_r_openslr_Hi_V2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wav2Vec2_xls_r_openslr_Hi_V2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Harveenchadha/indic-voice](https://huggingface.co/datasets/Harveenchadha/indic-voice) dataset. It achieves the following results on the evaluation set: - Loss: 0.3184 - Wer: 0.3104 - Cer: 0.0958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | Wer | |:-------------:|:-----:|:----:|:------:|:---------------:|:------:| | 7.1097 | 0.48 | 300 | 0.9965 | 3.3989 | 1.0 | | 3.0235 | 0.96 | 600 | 0.3163 | 1.3183 | 0.7977 | | 1.1419 | 1.44 | 900 | 0.1913 | 0.6416 | 0.5543 | | 0.8242 | 1.92 | 1200 | 0.1608 | 0.5063 | 0.4804 | | 0.6876 | 2.56 | 1600 | 0.1387 | 0.4401 | 0.4280 | | 0.5868 | 3.21 | 2000 | 0.1249 | 0.3940 | 0.3907 | | 0.5285 | 3.85 | 2400 | 0.1200 | 0.3661 | 0.3763 | | 0.5 | 4.49 | 2800 | 0.3528 | 0.3610 | 0.1136 | | 0.4538 | 5.13 | 3200 | 0.3403 | 0.3485 | 0.1086 | | 0.4165 | 5.77 | 3600 | 0.3335 | 0.3439 | 0.1062 | | 0.3989 | 6.41 | 4000 | 0.3264 | 0.3340 | 0.1036 | | 0.3679 | 7.05 | 4400 | 0.3256 | 0.3287 | 0.1013 | | 0.3517 | 7.69 | 4800 | 0.3212 | 0.3223 | 0.1002 | | 0.3357 | 8.33 | 5200 | 0.3173 | 0.3196 | 0.0986 | | 0.3225 | 8.97 | 5600 | 0.3142 | 0.3177 | 0.0985 | | 0.3057 | 9.62 | 6000 | 0.3199 | 0.3156 | 0.0975 | | 0.2972 | 10.26 | 6400 | 0.3139 | 0.3128 | 0.0967 | | 0.2881 | 10.9 | 6800 | 0.3184 | 0.3107 | 0.0957 | | 0.2791 | 11.54 | 7200 | 0.3184 | 0.3104 | 0.0958 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
jonc/distilbert-base-uncased-finetuned-emotion
jonc
2022-02-04T06:15:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9230733583303665 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2159 - Accuracy: 0.923 - F1: 0.9231 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8494 | 1.0 | 250 | 0.3134 | 0.907 | 0.9051 | | 0.2504 | 2.0 | 500 | 0.2159 | 0.923 | 0.9231 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
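As the generated card omits usage, here is a short, hedged example with the text-classification pipeline; the returned label names depend on the label mapping saved with the checkpoint (assumed to follow the `emotion` dataset classes).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jonc/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this worked, I'm thrilled!"))
# e.g. [{'label': 'joy', 'score': 0.98...}] if the emotion label mapping was saved
```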
ghofrani/common7
ghofrani
2022-02-04T01:32:24Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "fa", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - fa tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: common7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # common7 This model is a fine-tuned version of [common7/checkpoint-18500](https://huggingface.co/common7/checkpoint-18500) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FA dataset. It achieves the following results on the evaluation set: - Loss: 0.3448 - Wer: 0.3478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 150.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 2.957 | 3.29 | 500 | 2.9503 | 1.0 | | 1.7225 | 6.58 | 1000 | 0.8860 | 0.7703 | | 1.4907 | 9.86 | 1500 | 0.6555 | 0.6673 | | 1.4177 | 13.16 | 2000 | 0.5784 | 0.6076 | | 1.3425 | 16.45 | 2500 | 0.5379 | 0.5718 | | 1.33 | 19.73 | 3000 | 0.4962 | 0.5245 | | 1.4378 | 23.03 | 3500 | 0.4699 | 0.5098 | | 1.1894 | 26.31 | 4000 | 0.4527 | 0.4848 | | 1.1844 | 29.6 | 4500 | 0.4309 | 0.4651 | | 1.1795 | 32.89 | 5000 | 0.4131 | 0.4524 | | 1.1471 | 36.18 | 5500 | 0.4052 | 0.4435 | | 1.1337 | 39.47 | 6000 | 0.3927 | 0.4363 | | 1.1896 | 42.76 | 6500 | 0.3811 | 0.4254 | | 1.1847 | 46.05 | 7000 | 0.3855 | 0.4129 | | 0.9954 | 49.34 | 7500 | 0.3729 | 0.3981 | | 1.0293 | 52.63 | 8000 | 0.3637 | 0.4014 | | 1.0224 | 55.92 | 8500 | 0.3578 | 0.3885 | | 1.012 | 59.21 | 9000 | 0.3629 | 0.3930 | | 1.0772 | 62.5 | 9500 | 0.3635 | 0.3906 | | 1.0344 | 65.79 | 10000 | 0.3469 | 0.3771 | | 0.9457 | 69.08 | 10500 | 0.3435 | 0.3735 | | 0.9307 | 72.37 | 11000 | 0.3519 | 0.3762 | | 0.9523 | 75.65 | 11500 | 0.3443 | 0.3666 | | 0.9523 | 78.94 | 12000 | 0.3502 | 0.3757 | | 0.9475 | 82.24 | 12500 | 0.3509 | 0.3643 | | 0.9971 | 85.52 | 13000 | 0.3502 | 0.3626 | | 0.9058 | 88.81 | 13500 | 0.3472 | 0.3605 | | 0.8922 | 92.1 | 14000 | 0.3530 | 0.3618 | | 0.9 | 95.39 | 14500 | 0.3500 | 0.3574 | | 0.9051 | 98.68 | 15000 | 0.3456 | 0.3535 | | 0.9304 | 101.97 | 15500 | 0.3438 | 0.3578 | | 0.9433 | 105.26 | 16000 | 0.3396 | 0.3530 | | 0.8988 | 108.55 | 16500 | 0.3436 | 0.3539 | | 0.8789 | 111.84 | 17000 | 0.3426 | 0.3516 | | 0.8667 | 115.13 | 17500 | 0.3438 | 0.3506 | | 0.8895 | 118.42 | 18000 | 0.3434 | 0.3503 | | 0.8888 | 121.71 | 18500 | 0.3425 | 0.3494 | | 0.9453 | 125.0 | 19000 | 0.3415 | 0.3480 | | 0.9267 | 128.29 | 19500 | 0.3477 | 0.3503 | | 0.8315 | 131.58 | 20000 | 0.3476 | 0.3505 | | 0.8542 | 134.86 | 20500 | 0.3475 | 0.3506 | | 0.8478 | 138.16 | 21000 | 0.3430 | 0.3481 | | 0.8643 | 141.45 | 21500 | 0.3451 | 0.3485 | | 0.8705 | 144.73 | 22000 | 0.3444 | 0.3474 | | 0.9869 | 148.03 | 22500 | 0.3441 | 0.3493 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3.dev0 - Tokenizers 0.10.3
am-shb/bert-base-multilingual-cased-finetuned
am-shb
2022-02-03T21:59:27Z
3
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: '57426955' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 57426955 This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 16 - seed: 1337 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.11.2 - Pytorch 1.10.0 - Datasets 1.8.0 - Tokenizers 0.10.3
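The auto-generated card gives no usage example; a minimal fill-mask sketch follows (the probe sentence is arbitrary, and the top predictions are not guaranteed).

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="am-shb/bert-base-multilingual-cased-finetuned")
for pred in fill("Paris is the capital of [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```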
hyesunyun/NonsenseUpdateDiffIntBart
hyesunyun
2022-02-03T17:14:33Z
15
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "diff generation", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - en tags: - summarization - diff generation datasets: - nonsense corpus metrics: - rouge --- Hello! This is the pretrained BART model. The dataset used for pretraining is a nonsense summary corpus in which the target output is formatted as a diff.
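A hedged sketch of querying the model with the summarization pipeline is shown below; because the exact input/output format of the diff-generation setup is only loosely described, treat this strictly as an illustration.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="hyesunyun/NonsenseUpdateDiffIntBart")
document = (
    "Old summary: the intervention showed no measurable effect. "
    "New evidence: a larger follow-up trial reports a modest benefit."
)
# Per the card, the target side is a diff rather than free-form text.
print(summarizer(document, max_length=64, min_length=5, do_sample=False))
```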
ArBert/roberta-base-finetuned-ner
ArBert
2022-02-03T16:42:50Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-base-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0738 - Precision: 0.9232 - Recall: 0.9437 - F1: 0.9333 - Accuracy: 0.9825 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1397 | 1.0 | 1368 | 0.0957 | 0.9141 | 0.9048 | 0.9094 | 0.9753 | | 0.0793 | 2.0 | 2736 | 0.0728 | 0.9274 | 0.9324 | 0.9299 | 0.9811 | | 0.0499 | 3.0 | 4104 | 0.0738 | 0.9232 | 0.9437 | 0.9333 | 0.9825 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
BatuhanYilmaz
2022-02-03T15:17:21Z
19
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
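The card notes that the Exact Match and F1 scores were calculated with the `squad` metric from `datasets`; a minimal sketch of that computation (with dummy IDs and answers) might look like the following, using the `datasets` 1.x `load_metric` API that was current at the time.

```python
from datasets import load_metric

squad_metric = load_metric("squad")

predictions = [{"id": "001", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "001",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```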
ArBert/albert-base-v2-finetuned-ner
ArBert
2022-02-03T14:26:33Z
22
4
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: albert-base-v2-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9301181102362205 - name: Recall type: recall value: 0.9376033513394334 - name: F1 type: f1 value: 0.9338457315399397 - name: Accuracy type: accuracy value: 0.9851613086447802 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-ner This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0700 - Precision: 0.9301 - Recall: 0.9376 - F1: 0.9338 - Accuracy: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.096 | 1.0 | 1756 | 0.0752 | 0.9163 | 0.9201 | 0.9182 | 0.9811 | | 0.0481 | 2.0 | 3512 | 0.0761 | 0.9169 | 0.9293 | 0.9231 | 0.9830 | | 0.0251 | 3.0 | 5268 | 0.0700 | 0.9301 | 0.9376 | 0.9338 | 0.9852 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
Ayham/bert_distilgpt2_summarization_cnn_dailymail
Ayham
2022-02-03T13:33:41Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: bert_distilgpt2_summarization_cnn_dailymail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_distilgpt2_summarization_cnn_dailymail This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
diwank/maptask-deberta-pair
diwank
2022-02-03T12:51:24Z
5
1
transformers
[ "transformers", "pytorch", "tf", "deberta", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit --- # maptask-deberta-pair Deberta-based Daily MapTask style dialog-act annotations classification model ## Example ```python from simpletransformers.classification import ( ClassificationModel, ClassificationArgs ) model = ClassificationModel("deberta", "diwank/maptask-deberta-pair") predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]]) convert_to_label = lambda n: ["acknowledge (0), align (1), check (2), clarify (3), explain (4), instruct (5), query_w (6), query_yn (7), ready (8), reply_n (9), reply_w (10), reply_y (11)".split(', ')[i] for i in n] convert_to_label(predictions) # reply_n (9) ```
anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm
anuragshas
2022-02-03T12:28:34Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - pa-IN license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: - Loss: 0.6864 - Wer: 0.6707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 4.3322 | 14.81 | 400 | 3.7450 | 1.0 | | 3.2662 | 29.63 | 800 | 3.2571 | 0.9996 | | 1.6408 | 44.44 | 1200 | 0.9098 | 0.8162 | | 1.2289 | 59.26 | 1600 | 0.6757 | 0.7099 | | 1.0551 | 74.07 | 2000 | 0.6417 | 0.7044 | | 0.966 | 88.89 | 2400 | 0.6365 | 0.6789 | | 0.8713 | 103.7 | 2800 | 0.6617 | 0.6954 | | 0.8055 | 118.52 | 3200 | 0.6371 | 0.6762 | | 0.7489 | 133.33 | 3600 | 0.6798 | 0.6911 | | 0.7073 | 148.15 | 4000 | 0.6567 | 0.6731 | | 0.6609 | 162.96 | 4400 | 0.6742 | 0.6840 | | 0.6435 | 177.78 | 4800 | 0.6862 | 0.6633 | | 0.6282 | 192.59 | 5200 | 0.6865 | 0.6731 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
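No inference snippet is included in the generated card; a minimal ASR sketch is shown here. The audio path is a placeholder, input should be 16 kHz mono, and because this checkpoint is published "with-lm", decoding with the bundled n-gram model may additionally require `pyctcdecode` and `kenlm` to be installed.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm",
)
print(asr("sample_punjabi_16khz.wav"))  # placeholder path to a 16 kHz mono recording
```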
Hetarth/marian-finetuned-hi-hinglish
Hetarth
2022-02-03T09:54:31Z
8
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: marian-finetuned-hi-hinglish results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-hi-hinglish This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.1869 - Validation Loss: 4.0607 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 279, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.1869 | 4.0607 | 0 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
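Only a TensorFlow checkpoint is tagged for this repository, so a hedged TF inference sketch follows; the Hindi input sentence is illustrative and the Hinglish output is not guaranteed to be fluent.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "Hetarth/marian-finetuned-hi-hinglish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer(["मैं कल बाजार जाऊंगा"], return_tensors="tf")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```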
versae/kenlm-5gram-ncc
versae
2022-02-03T08:16:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
pritoms/distilroberta-base-YTTranscript23
pritoms
2022-02-03T05:52:25Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-YTTranscript23 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-YTTranscript23 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 70 | 2.9007 | | No log | 2.0 | 140 | 2.9651 | | No log | 3.0 | 210 | 2.9374 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
testimonial/wav2vec2-base-timit-demo-colab
testimonial
2022-02-03T03:07:06Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4688 - Wer: 0.3417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4156 | 4.0 | 500 | 1.2721 | 0.8882 | | 0.6145 | 8.0 | 1000 | 0.4712 | 0.4510 | | 0.229 | 12.0 | 1500 | 0.4459 | 0.3847 | | 0.1312 | 16.0 | 2000 | 0.4739 | 0.3786 | | 0.0897 | 20.0 | 2500 | 0.4483 | 0.3562 | | 0.0608 | 24.0 | 3000 | 0.4450 | 0.3502 | | 0.0456 | 28.0 | 3500 | 0.4688 | 0.3417 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
Plim/xls-r-300m-lm-fr
Plim
2022-02-02T23:29:54Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "fr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - fr tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [./checkpoint-6000](https://huggingface.co/./checkpoint-6000) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset. It achieves the following results on the evaluation set: - Loss: 0.2619 - Wer: 0.2457 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.495 | 0.16 | 500 | 3.3883 | 1.0 | | 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 | | 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 | | 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 | | 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 | | 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 | | 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 | | 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 | | 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 | | 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 | | 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 | | 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
microsoft/wavlm-large
microsoft
2022-02-02T21:21:50Z
310,610
67
transformers
[ "transformers", "pytorch", "wavlm", "feature-extraction", "speech", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.13900", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - en tags: - speech inference: false --- # WavLM-Large [Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm) The large model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. The model was pre-trained on: - 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875) - 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909) - 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390) [Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei **Abstract** *Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.* The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm. # Usage This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on the [SUPERB benchmark](https://superbbenchmark.org/). **Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence of phonemes before fine-tuning. ## Speech Recognition To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition). ## Speech Classification To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification). 
## Speaker Verification TODO ## Speaker Diarization TODO # Contribution The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten). # License The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE) ![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)
dropout05/t5-tiny
dropout05
2022-02-02T19:11:43Z
8
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
jonfd/convbert-base-igc-is
jonfd
2022-02-02T17:10:34Z
13
0
transformers
[ "transformers", "pytorch", "tf", "convbert", "feature-extraction", "is", "dataset:igc", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - is license: cc-by-4.0 datasets: - igc --- # Icelandic ConvBERT-Base This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105. # Acknowledgments This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
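# Example usage

A minimal sketch of loading the encoder for feature extraction with the `transformers` library; the Icelandic sentence is just an illustrative input.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jonfd/convbert-base-igc-is")
model = AutoModel.from_pretrained("jonfd/convbert-base-igc-is")

inputs = tokenizer("Halló heimur!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level hidden states: (batch, tokens, hidden_size)
print(outputs.last_hidden_state.shape)
```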
shaina/covid_qa_mpnet
shaina
2022-02-02T14:33:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "mpnet", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer widget: - text: "What is COVID-19?" context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic." - text: "Where was COVID-19 first discovered?" context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event." - text: "What is Post-COVID syndrome?" context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction." --- # covid_qa_mpnet This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on our COVID-19 dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2477 | 1.0 | 3895 | 0.1869 | | 0.1838 | 2.0 | 7790 | 0.1352 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
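## How to use

A minimal inference sketch with the question-answering pipeline, reusing one of the widget examples above; the printed fields are the standard pipeline output keys.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="shaina/covid_qa_mpnet")

result = qa(
    question="Where was COVID-19 first discovered?",
    context=(
        "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. "
        "The original source of viral transmission to humans remains unclear, as does "
        "whether the virus became pathogenic before or after the spillover event."
    ),
)
print(result["answer"], result["score"])
```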
mbateman/mt5-small-finetuned-amazon-en-es
mbateman
2022-02-02T10:07:07Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0393 - Rouge1: 17.3313 - Rouge2: 8.1251 - Rougel: 17.0359 - Rougelsum: 16.9503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.6665 | 1.0 | 1209 | 3.2917 | 13.908 | 5.5316 | 13.4368 | 13.4302 | | 3.8961 | 2.0 | 2418 | 3.1711 | 16.247 | 8.7234 | 15.7703 | 15.6964 | | 3.5801 | 3.0 | 3627 | 3.0917 | 17.3455 | 8.2467 | 16.8631 | 16.8147 | | 3.4258 | 4.0 | 4836 | 3.0583 | 16.0978 | 7.83 | 15.8065 | 15.7725 | | 3.3154 | 5.0 | 6045 | 3.0573 | 17.5531 | 8.7811 | 17.2252 | 17.2055 | | 3.2438 | 6.0 | 7254 | 3.0479 | 17.2072 | 8.0951 | 17.025 | 16.9644 | | 3.2024 | 7.0 | 8463 | 3.0377 | 17.3692 | 8.1843 | 17.019 | 17.0006 | | 3.1745 | 8.0 | 9672 | 3.0393 | 17.3313 | 8.1251 | 17.0359 | 16.9503 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
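## How to use

A minimal summarization sketch for this checkpoint; the product review below is an invented placeholder, not part of the training data.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mbateman/mt5-small-finetuned-amazon-en-es")

review = (
    "I bought this e-reader for my daughter and she loves it. The battery lasts for weeks "
    "and the screen is easy on the eyes, although the case feels a bit flimsy."
)
print(summarizer(review)[0]["summary_text"])
```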
beomus/layoutxlm
beomus
2022-02-02T08:21:14Z
8
1
transformers
[ "transformers", "pytorch", "layoutlmv2", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# LayoutXLM finetuned on XFUN.ja ```python import torch import numpy as np from PIL import Image, ImageDraw, ImageFont from pathlib import Path from itertools import chain from tqdm.notebook import tqdm from pdf2image import convert_from_path from transformers import LayoutXLMProcessor, LayoutLMv2ForTokenClassification import os os.environ["TOKENIZERS_PARALLELISM"] = "false" labels = [ 'O', 'B-QUESTION', 'B-ANSWER', 'B-HEADER', 'I-ANSWER', 'I-QUESTION', 'I-HEADER' ] id2label = {v: k for v, k in enumerate(labels)} label2id = {k: v for v, k in enumerate(labels)} def unnormalize_box(bbox, width, height): return [ width * (bbox[0] / 1000), height * (bbox[1] / 1000), width * (bbox[2] / 1000), height * (bbox[3] / 1000), ] def iob_to_label(label): label = label[2:] if not label: return 'other' return label label2color = {'question':'blue', 'answer':'green', 'header':'orange', 'other':'violet'} def infer(image, processor, model, label2color): # Use this if you're loading images # image = Image.open(img_path).convert("RGB") image = image.convert("RGB") # loading PDFs encoding = processor(image, return_offsets_mapping=True, return_tensors="pt", truncation=True, max_length=514) offset_mapping = encoding.pop('offset_mapping') outputs = model(**encoding) predictions = outputs.logits.argmax(-1).squeeze().tolist() token_boxes = encoding.bbox.squeeze().tolist() width, height = image.size is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0 true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]] true_boxes = [unnormalize_box(box, width, height) for idx, box in enumerate(token_boxes) if not is_subword[idx]] draw = ImageDraw.Draw(image) font = ImageFont.load_default() for prediction, box in zip(true_predictions, true_boxes): predicted_label = iob_to_label(prediction).lower() draw.rectangle(box, outline=label2color[predicted_label]) draw.text((box[0]+10, box[1]-10), text=predicted_label, fill=label2color[predicted_label], font=font) return image processor = LayoutXLMProcessor.from_pretrained('beomus/layoutxlm') model = LayoutLMv2ForTokenClassification.from_pretrained("beomus/layoutxlm") # imgs = [img_path for img_path in Path('/your/path/imgs/').glob('*.jpg')] imgs = [convert_from_path(img_path) for img_path in Path('/your/path/pdfs/').glob('*.pdf')] imgs = list(chain.from_iterable(imgs)) outputs = [infer(img_path, processor, model, label2color) for img_path in tqdm(imgs)] # type(outputs[0]) -> PIL.Image.Image ```
navsad/navid_test_bert
navsad
2022-02-02T04:52:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: navid_test_bert results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5834463254140851 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # navid_test_bert This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8149 - Matthews Correlation: 0.5834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4598 | 1.0 | 1069 | 0.4919 | 0.5314 | | 0.3228 | 2.0 | 2138 | 0.6362 | 0.5701 | | 0.17 | 3.0 | 3207 | 0.8149 | 0.5834 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
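## How to use

A minimal inference sketch; since the model was fine-tuned on CoLA, the prediction is a grammatical-acceptability judgment. The labels may come back as the raw `LABEL_0`/`LABEL_1` ids unless the config maps them to names.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="navsad/navid_test_bert")
print(classifier("The book was read by the whole class."))
```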
tal-yifat/bert-injury-classifier
tal-yifat
2022-02-02T04:35:44Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-injury-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-injury-classifier This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6915 - Accuracy: 0.5298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6676 | 1.0 | 19026 | 0.6635 | 0.6216 | | 0.6915 | 2.0 | 38052 | 0.6915 | 0.5298 | | 0.6924 | 3.0 | 57078 | 0.6915 | 0.5298 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
CalvinHuang/mt5-small-finetuned-amazon-en-es
CalvinHuang
2022-02-02T03:50:37Z
18
1
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0393 - Rouge1: 17.2936 - Rouge2: 8.0678 - Rougel: 16.8129 - Rougelsum: 16.9991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.6665 | 1.0 | 1209 | 3.2917 | 13.912 | 5.595 | 13.2984 | 13.4171 | | 3.8961 | 2.0 | 2418 | 3.1711 | 16.2845 | 8.6033 | 15.5509 | 15.7383 | | 3.5801 | 3.0 | 3627 | 3.0917 | 17.316 | 8.122 | 16.697 | 16.773 | | 3.4258 | 4.0 | 4836 | 3.0583 | 16.1347 | 7.7829 | 15.6475 | 15.7804 | | 3.3154 | 5.0 | 6045 | 3.0573 | 17.5918 | 8.7349 | 17.0537 | 17.2216 | | 3.2438 | 6.0 | 7254 | 3.0479 | 17.2294 | 8.0383 | 16.8141 | 16.9858 | | 3.2024 | 7.0 | 8463 | 3.0377 | 17.2918 | 8.139 | 16.8178 | 16.9671 | | 3.1745 | 8.0 | 9672 | 3.0393 | 17.2936 | 8.0678 | 16.8129 | 16.9991 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
BigSalmon/InfillFormalLincoln
BigSalmon
2022-02-02T03:45:03Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Informal to Formal: ``` from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln") model = AutoModelWithLMHead.from_pretrained("BigSalmon/InfillFormalLincoln") ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time) ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time) ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time) ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ```` ``` infill: increasing the number of sidewalks in suburban areas will [MASK]. Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ). infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago. infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly. Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly. infill: ```
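A minimal generation sketch using the prompt format shown above; `AutoModelForCausalLM` is the current equivalent of the deprecated `AutoModelWithLMHead`, and the sampling settings are illustrative rather than recommended values.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InfillFormalLincoln")

prompt = (
    "informal english: i am very ready to do that just that.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```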
cahya/wav2vec2-base-turkish-artificial-cv
cahya
2022-02-01T19:34:46Z
14
4
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "tr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: tr datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Wav2Vec2 Base Turkish by Cahya results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice tr type: common_voice args: tr metrics: - name: Test WER type: wer value: 13.70 --- # Wav2Vec2-Large-XLSR-Turkish This is the model for Wav2Vec2-Base-Turkish-Artificial-CV, a fine-tuned [cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial) model on [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv") model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv") # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv") model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]' # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 13.70 % ## Training The Common Voice `train`, `validation`, `other` and `invalidated` splits were used for training. The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
conjuring92/distilroberta-base-finetuned-toxic
conjuring92
2022-02-01T18:24:09Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-toxic results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-toxic This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2768 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5338 | 1.0 | 313 | 2.3127 | | 2.4482 | 2.0 | 626 | 2.2985 | | 2.4312 | 3.0 | 939 | 2.2411 | ### Framework versions - Transformers 4.16.0 - Pytorch 1.10.0 - Datasets 1.18.1 - Tokenizers 0.10.3
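## How to use

A minimal masked-language-modelling sketch; the prompt is an arbitrary example, and the top predictions simply reflect the domain the model was adapted to.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="conjuring92/distilroberta-base-finetuned-toxic")

prompt = f"The referee made a terrible {fill.tokenizer.mask_token}."
for prediction in fill(prompt)[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```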
cahya/output
cahya
2022-02-01T15:40:45Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.1822 - Wer: 0.1423 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-07 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
huggingtweets/scottmorrisonmp
huggingtweets
2022-02-01T11:31:28Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/scottmorrisonmp/1643715083152/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1116081523394891776/AYnEcQnG_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Scott Morrison</div> <div style="text-align: center; font-size: 14px;">@scottmorrisonmp</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Scott Morrison. | Data | Scott Morrison | | --- | --- | | Tweets downloaded | 3243 | | Retweets | 610 | | Short tweets | 34 | | Tweets kept | 2599 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ytdoprx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scottmorrisonmp's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19568gcc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19568gcc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/scottmorrisonmp') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
moussaKam/frugalscore_small_roberta_bert-score
moussaKam
2022-02-01T10:51:08Z
7
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2110.08559", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# FrugalScore FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper : | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
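A minimal scoring sketch for this checkpoint (`moussaKam/frugalscore_small_roberta_bert-score`). It assumes the model is a sentence-pair regression head whose single logit is the predicted BERTScore value; refer to the project repository for the official scoring script, and note that a `frugalscore` metric wrapper may also be available through the `evaluate`/`datasets` metric loaders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "moussaKam/frugalscore_small_roberta_bert-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

reference = "The cat sat on the mat."
candidate = "A cat was sitting on the mat."

# Score the (reference, candidate) pair; assumed to be a single regression logit.
inputs = tokenizer(reference, candidate, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```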
moussaKam/frugalscore_medium_bert-base_bert-score
moussaKam
2022-02-01T10:50:43Z
12
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2110.08559", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# FrugalScore FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper : | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
moussaKam/frugalscore_small_bert-base_bert-score
moussaKam
2022-02-01T10:50:31Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2110.08559", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# FrugalScore FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper : | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
moussaKam/frugalscore_tiny_bert-base_bert-score
moussaKam
2022-02-01T10:50:21Z
4310
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2110.08559", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# FrugalScore FrugalScore is an approach to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance Paper: https://arxiv.org/abs/2110.08559?context=cs Project github: https://github.com/moussaKam/FrugalScore The pretrained checkpoints presented in the paper : | FrugalScore | Student | Teacher | Method | |----------------------------------------------------|-------------|----------------|------------| | [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore | | [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore | | [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore | | [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore | | [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore | | [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore | | [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore | | [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
vachonni/wav2vec2-large-xls-r-300m-dansk-CV-80
vachonni
2022-02-01T07:55:36Z
4
2
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-dansk-CV-80 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-dansk-CV-80 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Danish, using the [mozilla-foundation/common_voice_8_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6394 - eval_wer: 0.3682 - eval_runtime: 104.0466 - eval_samples_per_second: 13.359 - eval_steps_per_second: 1.672 - epoch: 21.28 - step: 2000 ## Model description ASR Danish model ## Intended uses & limitations More information needed ## Training and evaluation data Danish subset of [mozilla-foundation/common_voice_8_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
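## How to use

A minimal transcription sketch with the ASR pipeline; the audio path is a placeholder for any local 16 kHz mono Danish recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="vachonni/wav2vec2-large-xls-r-300m-dansk-CV-80",
)

# Placeholder path to a local 16 kHz mono recording.
print(asr("sample_danish.wav")["text"])
```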
mikeee/model_s
mikeee
2022-02-01T07:41:39Z
0
0
transformers
[ "transformers", "zh", "en", "etc", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - zh - en - etc tags: - transformers ---
huggingtweets/clamtime-madramami
huggingtweets
2022-02-01T07:09:05Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/clamtime-madramami/1643699341002/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1486460616927858690/H_L_HiW-_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1486839044906618880/x1Q9ED9b_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">clementine!!!! & riley, twink eliminator 🐾🏳️‍⚧️</div> <div style="text-align: center; font-size: 14px;">@clamtime-madramami</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from clementine!!!! & riley, twink eliminator 🐾🏳️‍⚧️. | Data | clementine!!!! | riley, twink eliminator 🐾🏳️‍⚧️ | | --- | --- | --- | | Tweets downloaded | 3239 | 3247 | | Retweets | 340 | 114 | | Short tweets | 872 | 607 | | Tweets kept | 2027 | 2526 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lh3p7v6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clamtime-madramami's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/clamtime-madramami') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jkang/espnet2_an4_asr
jkang
2022-02-01T04:46:54Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:an4", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - an4 license: cc-by-4.0 --- ## ESPnet2 ASR model ### `jkang/espnet2_an4_asr` This model was trained by jaekookang using an4 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 48422215e272812feb9bbac9d7cf4aae6a316bca pip install -e . cd egs2/an4/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_an4_asr ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Tue Feb 1 13:22:35 KST 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.10.1` - Git hash: `48422215e272812feb9bbac9d7cf4aae6a316bca` - Commit date: `Fri Jan 28 17:25:31 2022 +0000` ## asr_train_asr_transformer_raw_en_bpe30_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|773|91.5|6.5|2.1|0.6|9.2|38.5| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|591|88.8|7.4|3.7|0.7|11.8|41.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2565|96.6|1.2|2.2|1.0|4.4|38.5| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|1915|94.0|1.7|4.3|0.4|6.4|41.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2695|96.8|1.1|2.1|0.9|4.2|38.5| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|2015|94.3|1.6|4.1|0.4|6.1|41.0| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_transformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_raw_en_bpe30_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 200 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 64 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe30_sp/train/speech_shape - exp/asr_stats_raw_en_bpe30_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe30_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe30_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: 
- 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_nodev_sp/wav.scp - speech - sound - - dump/raw/train_nodev_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/train_dev/wav.scp - speech - sound - - dump/raw/train_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 scheduler: warmuplr scheduler_conf: warmup_steps: 2500 token_list: - <blank> - <unk> - ▁ - T - E - O - R - Y - A - H - U - S - I - F - B - L - P - D - G - M - C - V - X - J - K - Z - W - N - Q - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram30/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe30_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Priyajay/xls-r-kn-test
Priyajay
2022-02-01T03:58:52Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - HI dataset. It achieves the following results on the evaluation set: - Loss: 26.7866 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
philschmid/bert-mini-sst2-distilled
philschmid
2022-01-31T23:34:03Z
256
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-mini-sst2-distilled results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.856651376146789 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-mini-sst2-distilled This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1792 - Accuracy: 0.8567 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00021185586235152412 - train_batch_size: 1024 - eval_batch_size: 1024 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1552 | 1.0 | 66 | 1.4847 | 0.8349 | | 0.8451 | 2.0 | 132 | 1.3495 | 0.8624 | | 0.5864 | 3.0 | 198 | 1.2257 | 0.8532 | | 0.4553 | 4.0 | 264 | 1.2571 | 0.8544 | | 0.3708 | 5.0 | 330 | 1.2132 | 0.8658 | | 0.3086 | 6.0 | 396 | 1.2370 | 0.8589 | | 0.2701 | 7.0 | 462 | 1.1900 | 0.8635 | | 0.246 | 8.0 | 528 | 1.1792 | 0.8567 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
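## How to use

A minimal sentiment-classification sketch for the distilled SST-2 checkpoint; the input sentence is an arbitrary example.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="philschmid/bert-mini-sst2-distilled")
print(classifier("This movie was surprisingly good!"))
```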
paintingpeter/distilbert-base-uncased-distilled-clinc
paintingpeter
2022-01-31T23:27:39Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9467741935483871 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2795 - Accuracy: 0.9468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.4223 | 1.0 | 318 | 2.5556 | 0.7561 | | 1.9655 | 2.0 | 636 | 1.3075 | 0.8577 | | 1.0041 | 3.0 | 954 | 0.6970 | 0.9165 | | 0.5449 | 4.0 | 1272 | 0.4637 | 0.9339 | | 0.3424 | 5.0 | 1590 | 0.3630 | 0.9397 | | 0.247 | 6.0 | 1908 | 0.3225 | 0.9442 | | 0.1968 | 7.0 | 2226 | 0.2983 | 0.9458 | | 0.1693 | 8.0 | 2544 | 0.2866 | 0.9465 | | 0.1547 | 9.0 | 2862 | 0.2820 | 0.9468 | | 0.1477 | 10.0 | 3180 | 0.2795 | 0.9468 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
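As with the other auto-generated cards, no usage example is included; the sketch below shows how this distilled intent classifier could be queried. The query string is an assumption, and the returned label is one of the clinc_oos ("plus") intents.

```python
from transformers import pipeline

# Intent classifier distilled on the clinc_oos "plus" configuration.
intent_classifier = pipeline(
    "text-classification",
    model="paintingpeter/distilbert-base-uncased-distilled-clinc",
)

# Returns the most likely intent label and its score.
print(intent_classifier("Please transfer 100 dollars to my savings account."))
```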
gilparmentier/pokemon_gptj_model
gilparmentier
2022-01-31T21:19:06Z
4
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "en", "arxiv:2104.09864", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - pytorch - causal-lm license: apache-2.0 datasets: - The Pile --- # GPT-J 6B ## Model Description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. <figure> | Hyperparameter | Value | |----------------------|------------| | \\(n_{parameters}\\) | 6053381344 | | \\(n_{layers}\\) | 28&ast; | | \\(d_{model}\\) | 4096 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p> <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure> The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Training data GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai). ## Training procedure This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. ## Intended Use and Limitations GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt. ### How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") ``` ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. 
We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Evaluation results <figure> | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) | |--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------| | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 | | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- | | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 | | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 | | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 | | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 | | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 | | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- | | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 | | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 | | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 | | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** | | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 | | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- | | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 | | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 | | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- | <figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p> <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p> <p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>) Thus, evaluation was not attempted.</p> <p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. 
The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure> ## Citation and Related Information ### BibTeX entry To cite this model: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Thanks to everyone who have helped out one way or another (listed alphabetically): - [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues. - [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package. - [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table. - [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo. - [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts. - [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
glob-asr/wav2vec2-large-xls-r-300m-spanish-small
glob-asr
2022-01-31T20:58:46Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-small This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3596 - Wer: 0.2105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.1971 | 0.79 | 400 | 0.2169 | 0.2077 | | 0.2293 | 1.58 | 800 | 0.2507 | 0.2418 | | 0.2065 | 2.37 | 1200 | 0.2703 | 0.2459 | | 0.1842 | 3.16 | 1600 | 0.2716 | 0.2495 | | 0.1634 | 3.95 | 2000 | 0.2695 | 0.2510 | | 0.1443 | 4.74 | 2400 | 0.2754 | 0.2435 | | 0.1345 | 5.53 | 2800 | 0.3119 | 0.2654 | | 0.1267 | 6.32 | 3200 | 0.3154 | 0.2573 | | 0.1237 | 7.11 | 3600 | 0.3251 | 0.2666 | | 0.1118 | 7.91 | 4000 | 0.3139 | 0.2503 | | 0.1051 | 8.7 | 4400 | 0.3286 | 0.2573 | | 0.0964 | 9.49 | 4800 | 0.3348 | 0.2587 | | 0.0946 | 10.28 | 5200 | 0.3357 | 0.2587 | | 0.0897 | 11.07 | 5600 | 0.3408 | 0.2590 | | 0.0812 | 11.86 | 6000 | 0.3380 | 0.2560 | | 0.079 | 12.65 | 6400 | 0.3304 | 0.2415 | | 0.0753 | 13.44 | 6800 | 0.3557 | 0.2540 | | 0.0717 | 14.23 | 7200 | 0.3507 | 0.2519 | | 0.0691 | 15.02 | 7600 | 0.3554 | 0.2587 | | 0.0626 | 15.81 | 8000 | 0.3619 | 0.2520 | | 0.0661 | 16.6 | 8400 | 0.3609 | 0.2564 | | 0.0582 | 17.39 | 8800 | 0.3818 | 0.2520 | | 0.0556 | 18.18 | 9200 | 0.3685 | 0.2410 | | 0.0515 | 18.97 | 9600 | 0.3658 | 0.2367 | | 0.0478 | 19.76 | 10000 | 0.3701 | 0.2413 | | 0.0486 | 20.55 | 10400 | 0.3681 | 0.2371 | | 0.0468 | 21.34 | 10800 | 0.3607 | 0.2370 | | 0.0452 | 22.13 | 11200 | 0.3499 | 0.2286 | | 0.0399 | 22.92 | 11600 | 0.3647 | 0.2282 | | 0.0393 | 23.72 | 12000 | 0.3638 | 0.2255 | | 0.0381 | 24.51 | 12400 | 0.3359 | 0.2202 | | 0.0332 | 25.3 | 12800 | 0.3488 | 0.2177 | | 0.033 | 26.09 | 13200 | 0.3628 | 0.2175 | | 0.0311 | 26.88 | 13600 | 0.3695 | 0.2195 | | 0.0294 | 27.67 | 14000 | 0.3624 | 0.2164 | | 0.0281 | 28.46 | 14400 | 0.3688 | 0.2113 | | 0.0274 | 29.25 | 14800 | 0.3596 | 0.2105 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
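A minimal transcription sketch for this Spanish checkpoint follows; the audio path is a placeholder, and 16 kHz mono audio plus a local ffmpeg install (used by the pipeline to decode files) are assumed.

```python
from transformers import pipeline

# Spanish wav2vec2 CTC model fine-tuned on Common Voice.
asr = pipeline(
    "automatic-speech-recognition",
    model="glob-asr/wav2vec2-large-xls-r-300m-spanish-small",
)

# "sample_es.wav" is a placeholder; any 16 kHz mono recording should work.
print(asr("sample_es.wav")["text"])
```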
arianpasquali/distilbert-base-uncased-finetuned-clinc
arianpasquali
2022-01-31T20:09:00Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9112903225806451 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7751 - Accuracy: 0.9113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.315 | 1.0 | 318 | 3.3087 | 0.74 | | 2.6371 | 2.0 | 636 | 1.8833 | 0.8381 | | 1.5388 | 3.0 | 954 | 1.1547 | 0.8929 | | 1.0076 | 4.0 | 1272 | 0.8590 | 0.9071 | | 0.79 | 5.0 | 1590 | 0.7751 | 0.9113 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.7.1 - Datasets 1.16.1 - Tokenizers 0.10.3
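For this clinc_oos classifier, a sketch that skips the pipeline and works with the tokenizer and model directly is shown below; the query text is an illustrative assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "arianpasquali/distilbert-base-uncased-finetuned-clinc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("How do I reset my bank password?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to its intent name via the model config.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```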
akshaychaudhary/distilbert-base-uncased-finetuned-ner
akshaychaudhary
2022-01-31T18:50:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9988 - Precision: 0.3 - Recall: 0.6 - F1: 0.4 - Accuracy: 0.7870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 84 | 0.8399 | 0.2105 | 0.4 | 0.2759 | 0.75 | | No log | 2.0 | 168 | 0.9664 | 0.3 | 0.6 | 0.4 | 0.7870 | | No log | 3.0 | 252 | 0.9988 | 0.3 | 0.6 | 0.4 | 0.7870 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
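Because the card does not document the entity set of the unnamed dataset it was fine-tuned on, the sketch below only shows the mechanics of running the checkpoint as a token-classification pipeline; the sentence and any returned entity groups are illustrative.

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="akshaychaudhary/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```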
shaina/CoQUAD_MPNet
shaina
2022-01-31T18:22:46Z
0
0
null
[ "MPNet", "en", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
tags:
- MPNet
license: apache-2.0
dataset:
- covid-19
---

# CoQUAD_MPNet: MPNet model for COVID-19

## Introduction

CoQUAD_MPNet is a state-of-the-art MPNet-based reader model fine-tuned for extractive question answering over a COVID-19 dataset, with a focus on post-COVID questions.

## How to use with Deepset Haystack

```python
# Haystack v1.x extractive QA pipeline. A running Elasticsearch instance on
# localhost:9200 is assumed; the original card did not show how the document
# store was populated, so that part is a sketch.
from datasets import load_dataset
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import ElasticsearchRetriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline
from pprint import pprint

# Load the COVID-19 dataset from the Hub.
dataset = load_dataset("shaina/covid19")

# Index the documents; the "text" column name is an assumption.
document_store = ElasticsearchDocumentStore(host="localhost", port=9200, index="covid19")
document_store.write_documents(
    [{"content": row["text"]} for row in dataset["train"]]
)

# Sparse retriever over the indexed documents.
retriever = ElasticsearchRetriever(document_store=document_store)

# The reader loads this model directly from the Hub (the original card cloned
# the repository with git-lfs and pointed FARMReader at a local copy instead).
reader = FARMReader(model_name_or_path="shaina/CoQUAD_MPNet", use_gpu=True)

pipe = ExtractiveQAPipeline(reader, retriever)
prediction = pipe.run(
    query="What is post-COVID?",
    params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 5}},
)
pprint(prediction)
```

---
## Authors
Shaina Raza
---
masapasa/xls-r-ab-test
masapasa
2022-01-31T17:22:19Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 140.0674 - Wer: 1.1193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
ncats/EpiExtract4GARD-v1
ncats
2022-01-31T17:03:33Z
21
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
## Model description **EpiExtract4GARD** is a fine-tuned [BioBERT-base-cased](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER) for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline. #### How to use You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births." See code below for use with Transformers *pipeline* for NER.: ~~~ from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD") tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD") NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer,aggregation_strategy='simple') sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia." sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). 
This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives." #Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553. #Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821. NER_pipeline(sample) NER_pipeline(sample2) ~~~ Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb): ~~~ import pandas as pd import extract_abs import classify_abs pd.set_option('display.max_colwidth', None) NER_pipeline = extract_abs.init_NER_pipeline() GARD_dict, max_length = extract_abs.load_GARD_diseases() nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model() def search(term,num_results = 50): return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length,nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer) a = search(7058) a b = search('Santos Mateus Leal syndrome') b c = search('Fellman syndrome') c d = search('GARD:0009941') d e = search('Homocystinuria') e ~~~ #### Limitations and bias ## Training data It was trained on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: Abbreviation|Description ---------|-------------- O |Outside of a named entity B-LOC | Beginning of a location I-LOC | Inside of a location B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence") I-EPI | Epidemiologic type that is not the beginning token. 
B-STAT | Beginning of an epidemiologic rate I-STAT | Inside of an epidemiologic rate ### EpiSet Statistics Beyond any limitations due to the EpiSet4NER dataset, this model is limited in numeracy due to BERT-based model's use of subword embeddings, which is crucial for epidemiologic rate identification and limits the entity-level results. Additionally, more recent weakly supervised learning techniques could be used to improve the performance of the model without improving the underlying dataset. ## Training procedure This model was trained on a [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/), which utilized a single Tesla V100 GPU, with these hyperparameters: 4 epochs of training (AdamW weight decay = 0.05) with a batch size of 16. Maximum sequence length = 192. Model was fed one sentence at a time. Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml). ## Hold-out validation results metric| entity-level result -|- f1 | 83.8 precision | 83.2 recall | 84.5 ## Test results | Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 | |:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:| | EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 | | | | Location | 0.661 | 0.696 | 0.678 | | | | Epidemiologic Type | 0.854 | 0.911 | 0.882 | | | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 | | | Token-Level | Overall | 0.811 | 0.713 | 0.759 | | | | Location | 0.949 | 0.742 | 0.833 | | | | Epidemiologic Type | 0.9 | 0.917 | 0.908 | | | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 | Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model.
gagan3012/xls-r-300m-pa
gagan3012
2022-01-31T15:27:47Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - pa-IN license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: xls-r-300m-pa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-300m-pa This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: - Loss: 1.0443 - Wer: 0.5715 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 500.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 4.6694 | 19.22 | 500 | 4.0455 | 1.0 | | 3.3907 | 38.45 | 1000 | 3.2836 | 1.0 | | 2.0866 | 57.67 | 1500 | 1.2788 | 0.7715 | | 1.4106 | 76.9 | 2000 | 0.7866 | 0.6891 | | 1.1711 | 96.15 | 2500 | 0.6556 | 0.6272 | | 1.038 | 115.37 | 3000 | 0.6195 | 0.5680 | | 0.8989 | 134.6 | 3500 | 0.6563 | 0.5602 | | 0.8021 | 153.82 | 4000 | 0.6644 | 0.5327 | | 0.7161 | 173.07 | 4500 | 0.6844 | 0.5253 | | 0.6449 | 192.3 | 5000 | 0.7018 | 0.5331 | | 0.5659 | 211.52 | 5500 | 0.7451 | 0.5465 | | 0.5118 | 230.75 | 6000 | 0.7857 | 0.5386 | | 0.4385 | 249.97 | 6500 | 0.8062 | 0.5382 | | 0.3984 | 269.22 | 7000 | 0.8316 | 0.5621 | | 0.3666 | 288.45 | 7500 | 0.8736 | 0.5504 | | 0.3256 | 307.67 | 8000 | 0.9133 | 0.5688 | | 0.289 | 326.9 | 8500 | 0.9556 | 0.5684 | | 0.2663 | 346.15 | 9000 | 0.9344 | 0.5708 | | 0.2445 | 365.37 | 9500 | 0.9472 | 0.5590 | | 0.2289 | 384.6 | 10000 | 0.9713 | 0.5672 | | 0.2048 | 403.82 | 10500 | 0.9978 | 0.5762 | | 0.1857 | 423.07 | 11000 | 1.0230 | 0.5798 | | 0.1751 | 442.3 | 11500 | 1.0409 | 0.5755 | | 0.1688 | 461.52 | 12000 | 1.0445 | 0.5727 | | 0.1633 | 480.75 | 12500 | 1.0484 | 0.5739 | | 0.1488 | 499.97 | 13000 | 1.0443 | 0.5715 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
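A lower-level decoding sketch for this Punjabi checkpoint is shown below, using the processor and CTC head directly instead of the pipeline; the audio path is a placeholder and 16 kHz audio is assumed.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "gagan3012/xls-r-300m-pa"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "recording_pa.wav" is a placeholder path; resample to 16 kHz on load.
speech, _ = librosa.load("recording_pa.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```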
osanseviero/test_meta
osanseviero
2022-01-31T15:21:09Z
0
0
spacy
[ "spacy", "token-classification", "license:lgpl-lr", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification languages: - fr license: lgpl-lr other-thing: test ---
huggingtweets/_ikeay-ikeay
huggingtweets
2022-01-31T07:45:27Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/_ikeay-ikeay/1643615122837/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1334136134234849280/XgE0O39a_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1438483410503176195/v_ghm6Un_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">池澤あやか / いけあや & いけあや(意識が低い方)</div> <div style="text-align: center; font-size: 14px;">@_ikeay-ikeay</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 池澤あやか / いけあや & いけあや(意識が低い方). | Data | 池澤あやか / いけあや | いけあや(意識が低い方) | | --- | --- | --- | | Tweets downloaded | 3249 | 3248 | | Retweets | 233 | 24 | | Short tweets | 2345 | 2299 | | Tweets kept | 671 | 925 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vm4ts8h/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_ikeay-ikeay's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33elayne) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33elayne/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_ikeay-ikeay') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/eri_razapii-marisakura-miyakomx
huggingtweets
2022-01-31T07:36:10Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/eri_razapii-marisakura-miyakomx/1643614565483/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1463699400405164034/aRY9jlnO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1460131579930755073/ln4j-nWU_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1466279279667277828/VqmxK5gB_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">えりらざぴ | SHE CEO/CCO & 櫻本真理 cotree/CoachEd & 吉澤美弥子🤿Coral Capital</div> <div style="text-align: center; font-size: 14px;">@eri_razapii-marisakura-miyakomx</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from えりらざぴ | SHE CEO/CCO & 櫻本真理 cotree/CoachEd & 吉澤美弥子🤿Coral Capital. | Data | えりらざぴ | SHE CEO/CCO | 櫻本真理 cotree/CoachEd | 吉澤美弥子🤿Coral Capital | | --- | --- | --- | --- | | Tweets downloaded | 3232 | 3205 | 1206 | | Retweets | 1781 | 1564 | 79 | | Short tweets | 959 | 877 | 736 | | Tweets kept | 492 | 764 | 391 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xlu40i1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eri_razapii-marisakura-miyakomx's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22cwqnkv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22cwqnkv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eri_razapii-marisakura-miyakomx') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
TajMahaladeen/pokemon_gptj
TajMahaladeen
2022-01-31T06:12:31Z
9
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
leandrodzp/cbow_uruguayan_news
leandrodzp
2022-01-31T02:38:31Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# Supervised Continuous Bag of Words model trained on Uruguayan news from Twitter Model trained with Facebook's fastText library.
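The card gives no loading instructions; a minimal fastText sketch is below, assuming the repository ships a standard `.bin` model file (the filename here is a guess, not from the card).

```python
import fasttext

# "cbow_uruguayan_news.bin" is an assumed filename for the downloaded model file.
model = fasttext.load_model("cbow_uruguayan_news.bin")

# Inspect the learned word representations.
print(model.get_word_vector("noticias")[:5])
print(model.get_nearest_neighbors("montevideo"))
```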
huggingtweets/newsfrmhome
huggingtweets
2022-01-30T20:50:52Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/newsfrmhome/1643575848331/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484642358641807369/XYfGxtPs_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">sarah (allegedly)</div> <div style="text-align: center; font-size: 14px;">@newsfrmhome</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from sarah (allegedly). | Data | sarah (allegedly) | | --- | --- | | Tweets downloaded | 3229 | | Retweets | 448 | | Short tweets | 378 | | Tweets kept | 2403 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kr9qjmz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @newsfrmhome's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zjy142t4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zjy142t4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/newsfrmhome') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
osama7/t5-summarization-multinews
osama7
2022-01-30T20:42:51Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
This is a t5-base model fine-tuned on the multi_news dataset for abstractive summarization.
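A short summarization sketch for this checkpoint follows; the article text is a placeholder and the generation lengths are arbitrary choices, not values from the card.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="osama7/t5-summarization-multinews")

# Placeholder input: substitute any news article text.
article = "The city council met on Tuesday to debate the new transit plan..."
summary = summarizer(article, max_length=150, min_length=40, do_sample=False)
print(summary[0]["summary_text"])
```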
Kayvane/distilbert-complaints-product
Kayvane
2022-01-30T19:15:13Z
33
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:consumer_complaints", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - consumer_complaints model-index: - name: distilbert-complaints-product results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-complaints-product This model was trained on the [CFPB](https://www.consumerfinance.gov/data-research/consumer-complaints/) consumer complaints dataset, also made available on the HuggingFace Datasets library. This model predicts the type of financial complaint based on the text provided. ## Model description A DistilBERT text classification model, with 18 possible classes to determine the nature of a financial customer complaint. ## Intended uses & limitations This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation: - **Infrastructure:** Terraform - **ML Ops:** HuggingFace (Datasets, Hub, Transformers) - **ML Explainability:** SHAP - **Cloud:** AWS - Model Hosting: Lambda - DB Backend: DynamoDB - Orchestration: Step-Functions - UI Hosting: EC2 - Routing: API Gateway - **UI:** Budibase ## Training and evaluation data consumer_complaints dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
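A minimal sketch of querying the complaint classifier is shown below; the complaint text is invented for illustration, and the 18 label names come from the checkpoint's config rather than this card.

```python
from transformers import pipeline

complaint_classifier = pipeline(
    "text-classification", model="Kayvane/distilbert-complaints-product"
)

# Returns the most likely complaint category and its score.
print(complaint_classifier(
    "I was charged twice for my credit card payment and nobody will refund me."
))
```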
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small
tomascufaro
2022-01-30T17:23:59Z
14
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-small This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3763 - Wer: 0.1791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2277 | 0.26 | 400 | 0.2601 | 0.2291 | | 0.2932 | 0.53 | 800 | 0.2950 | 0.2670 | | 0.3019 | 0.79 | 1200 | 0.3247 | 0.2766 | | 0.2987 | 1.05 | 1600 | 0.3031 | 0.2606 | | 0.261 | 1.32 | 2000 | 0.2994 | 0.2620 | | 0.2651 | 1.58 | 2400 | 0.3134 | 0.2700 | | 0.264 | 1.85 | 2800 | 0.3016 | 0.2641 | | 0.2475 | 2.11 | 3200 | 0.3135 | 0.2661 | | 0.2269 | 2.37 | 3600 | 0.3029 | 0.2562 | | 0.2389 | 2.64 | 4000 | 0.3035 | 0.2549 | | 0.2319 | 2.9 | 4400 | 0.3022 | 0.2551 | | 0.2123 | 3.16 | 4800 | 0.3256 | 0.2638 | | 0.2094 | 3.43 | 5200 | 0.3227 | 0.2712 | | 0.2121 | 3.69 | 5600 | 0.3085 | 0.2596 | | 0.207 | 3.96 | 6000 | 0.3041 | 0.2597 | | 0.1809 | 4.22 | 6400 | 0.3122 | 0.2524 | | 0.1846 | 4.48 | 6800 | 0.3254 | 0.2579 | | 0.1885 | 4.75 | 7200 | 0.2958 | 0.2437 | | 0.1923 | 5.01 | 7600 | 0.3136 | 0.2502 | | 0.1626 | 5.27 | 8000 | 0.3059 | 0.2488 | | 0.1704 | 5.54 | 8400 | 0.3082 | 0.2515 | | 0.1674 | 5.8 | 8800 | 0.3196 | 0.2509 | | 0.1691 | 6.06 | 9200 | 0.3193 | 0.25 | | 0.1499 | 6.33 | 9600 | 0.3529 | 0.2635 | | 0.1568 | 6.59 | 10000 | 0.3241 | 0.2481 | | 0.1538 | 6.86 | 10400 | 0.3354 | 0.2476 | | 0.1503 | 7.12 | 10800 | 0.3180 | 0.2402 | | 0.136 | 7.38 | 11200 | 0.3230 | 0.2397 | | 0.1413 | 7.65 | 11600 | 0.3178 | 0.2451 | | 0.147 | 7.91 | 12000 | 0.3170 | 0.2389 | | 0.1341 | 8.17 | 12400 | 0.3380 | 0.2501 | | 0.1329 | 8.44 | 12800 | 0.3265 | 0.2414 | | 0.1314 | 8.7 | 13200 | 0.3281 | 0.2482 | | 0.1312 | 8.97 | 13600 | 0.3259 | 0.2539 | | 0.12 | 9.23 | 14000 | 0.3291 | 0.2424 | | 0.1193 | 9.49 | 14400 | 0.3302 | 0.2412 | | 0.1189 | 9.76 | 14800 | 0.3376 | 0.2407 | | 0.1217 | 10.02 | 15200 | 0.3334 | 0.2400 | | 0.1118 | 10.28 | 15600 | 0.3359 | 0.2368 | | 0.1139 | 10.55 | 16000 | 0.3239 | 0.2335 | | 0.1106 | 10.81 | 16400 | 0.3374 | 0.2352 | | 0.1081 | 11.07 | 16800 | 0.3585 | 0.2434 | | 0.1063 | 11.34 | 17200 | 0.3639 | 0.2472 | | 0.1041 | 11.6 | 17600 | 0.3399 | 0.2423 | | 0.1062 | 11.87 | 18000 | 0.3410 | 0.2388 | | 0.1012 | 12.13 | 18400 | 0.3597 | 0.2413 | | 0.0953 | 12.39 | 18800 | 0.3440 | 0.2296 | | 0.097 | 12.66 | 19200 | 0.3440 | 0.2269 | | 0.0968 | 12.92 | 19600 | 0.3498 | 0.2333 | | 0.0902 | 13.18 | 20000 | 0.3471 | 0.2290 | | 0.0868 | 
13.45 | 20400 | 0.3462 | 0.2266 | | 0.0892 | 13.71 | 20800 | 0.3373 | 0.2227 | | 0.0902 | 13.97 | 21200 | 0.3377 | 0.2240 | | 0.0846 | 14.24 | 21600 | 0.3484 | 0.2237 | | 0.0839 | 14.5 | 22000 | 0.3706 | 0.2260 | | 0.0834 | 14.77 | 22400 | 0.3430 | 0.2268 | | 0.0841 | 15.03 | 22800 | 0.3489 | 0.2259 | | 0.076 | 15.29 | 23200 | 0.3626 | 0.2281 | | 0.0771 | 15.56 | 23600 | 0.3624 | 0.2268 | | 0.0773 | 15.82 | 24000 | 0.3440 | 0.2252 | | 0.0759 | 16.08 | 24400 | 0.3532 | 0.2170 | | 0.0745 | 16.35 | 24800 | 0.3686 | 0.2188 | | 0.0713 | 16.61 | 25200 | 0.3691 | 0.2195 | | 0.0718 | 16.88 | 25600 | 0.3470 | 0.2108 | | 0.0685 | 17.14 | 26000 | 0.3756 | 0.2179 | | 0.0689 | 17.4 | 26400 | 0.3542 | 0.2149 | | 0.0671 | 17.67 | 26800 | 0.3461 | 0.2165 | | 0.0737 | 17.93 | 27200 | 0.3473 | 0.2238 | | 0.0669 | 18.19 | 27600 | 0.3441 | 0.2138 | | 0.0629 | 18.46 | 28000 | 0.3721 | 0.2155 | | 0.0632 | 18.72 | 28400 | 0.3667 | 0.2126 | | 0.0647 | 18.98 | 28800 | 0.3579 | 0.2097 | | 0.0603 | 19.25 | 29200 | 0.3670 | 0.2130 | | 0.0604 | 19.51 | 29600 | 0.3750 | 0.2142 | | 0.0619 | 19.78 | 30000 | 0.3804 | 0.2160 | | 0.0603 | 20.04 | 30400 | 0.3764 | 0.2124 | | 0.0577 | 20.3 | 30800 | 0.3858 | 0.2097 | | 0.0583 | 20.57 | 31200 | 0.3520 | 0.2089 | | 0.0561 | 20.83 | 31600 | 0.3615 | 0.2079 | | 0.0545 | 21.09 | 32000 | 0.3824 | 0.2032 | | 0.0525 | 21.36 | 32400 | 0.3858 | 0.2091 | | 0.0524 | 21.62 | 32800 | 0.3956 | 0.2099 | | 0.0527 | 21.89 | 33200 | 0.3667 | 0.2025 | | 0.0514 | 22.15 | 33600 | 0.3708 | 0.2032 | | 0.0506 | 22.41 | 34000 | 0.3815 | 0.2053 | | 0.0478 | 22.68 | 34400 | 0.3671 | 0.2007 | | 0.049 | 22.94 | 34800 | 0.3758 | 0.2003 | | 0.0477 | 23.2 | 35200 | 0.3786 | 0.2014 | | 0.045 | 23.47 | 35600 | 0.3732 | 0.1998 | | 0.0426 | 23.73 | 36000 | 0.3737 | 0.2010 | | 0.0444 | 23.99 | 36400 | 0.3600 | 0.1990 | | 0.0433 | 24.26 | 36800 | 0.3689 | 0.1976 | | 0.0442 | 24.52 | 37200 | 0.3787 | 0.1968 | | 0.0419 | 24.79 | 37600 | 0.3652 | 0.1961 | | 0.042 | 25.05 | 38000 | 0.3820 | 0.1964 | | 0.0419 | 25.31 | 38400 | 0.3786 | 0.1919 | | 0.0376 | 25.58 | 38800 | 0.3842 | 0.1934 | | 0.0385 | 25.84 | 39200 | 0.3767 | 0.1900 | | 0.0396 | 26.1 | 39600 | 0.3688 | 0.1888 | | 0.0371 | 26.37 | 40000 | 0.3815 | 0.1894 | | 0.0363 | 26.63 | 40400 | 0.3748 | 0.1878 | | 0.0377 | 26.9 | 40800 | 0.3713 | 0.1852 | | 0.0352 | 27.16 | 41200 | 0.3734 | 0.1851 | | 0.0355 | 27.42 | 41600 | 0.3776 | 0.1874 | | 0.0333 | 27.69 | 42000 | 0.3867 | 0.1841 | | 0.0348 | 27.95 | 42400 | 0.3823 | 0.1839 | | 0.0329 | 28.21 | 42800 | 0.3795 | 0.1822 | | 0.0325 | 28.48 | 43200 | 0.3711 | 0.1813 | | 0.0328 | 28.74 | 43600 | 0.3721 | 0.1781 | | 0.0312 | 29.0 | 44000 | 0.3803 | 0.1816 | | 0.0318 | 29.27 | 44400 | 0.3758 | 0.1794 | | 0.0302 | 29.53 | 44800 | 0.3792 | 0.1784 | | 0.0339 | 29.8 | 45200 | 0.3763 | 0.1791 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
anuragshas/wav2vec2-xls-r-1b-hi-cv8
anuragshas
2022-01-30T15:20:16Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.6780 - Wer: 0.3670 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.514 | 2.07 | 400 | 1.4589 | 0.8531 | | 1.4289 | 4.15 | 800 | 0.8940 | 0.6475 | | 1.276 | 6.22 | 1200 | 0.7743 | 0.6089 | | 1.2213 | 8.29 | 1600 | 0.6919 | 0.4973 | | 1.1522 | 10.36 | 2000 | 0.6635 | 0.4588 | | 1.0914 | 12.44 | 2400 | 0.6839 | 0.4586 | | 1.0499 | 14.51 | 2800 | 0.7151 | 0.4467 | | 1.0238 | 16.58 | 3200 | 0.6824 | 0.4436 | | 0.9963 | 18.65 | 3600 | 0.6872 | 0.4437 | | 0.9728 | 20.73 | 4000 | 0.7047 | 0.4244 | | 0.9373 | 22.8 | 4400 | 0.6569 | 0.4189 | | 0.9028 | 24.87 | 4800 | 0.6623 | 0.4094 | | 0.8759 | 26.94 | 5200 | 0.6723 | 0.4152 | | 0.8824 | 29.02 | 5600 | 0.6467 | 0.4017 | | 0.8371 | 31.09 | 6000 | 0.6911 | 0.4080 | | 0.8205 | 33.16 | 6400 | 0.7145 | 0.4063 | | 0.7837 | 35.23 | 6800 | 0.7037 | 0.3930 | | 0.7708 | 37.31 | 7200 | 0.6925 | 0.3840 | | 0.7359 | 39.38 | 7600 | 0.7034 | 0.3829 | | 0.7153 | 41.45 | 8000 | 0.7030 | 0.3794 | | 0.7127 | 43.52 | 8400 | 0.6823 | 0.3761 | | 0.6884 | 45.6 | 8800 | 0.6854 | 0.3711 | | 0.6835 | 47.67 | 9200 | 0.6723 | 0.3665 | | 0.6703 | 49.74 | 9600 | 0.6773 | 0.3668 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
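For this Hindi checkpoint, a pipeline sketch with chunked inference (useful for recordings longer than a few seconds) is shown below; the file path is a placeholder, and ffmpeg is assumed to be available for decoding.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anuragshas/wav2vec2-xls-r-1b-hi-cv8",
)

# chunk_length_s splits long audio into overlapping windows for CTC decoding.
print(asr("hindi_sample.wav", chunk_length_s=30)["text"])
```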
imvladikon/charbert-roberta-wiki
imvladikon
2022-01-30T11:37:26Z
10
1
transformers
[ "transformers", "pytorch", "language model", "en", "dataset:wikipedia", "arxiv:2011.01513", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en tags: - language model datasets: - wikipedia --- pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT) ``` @misc{ma2020charbert, title={CharBERT: Character-aware Pre-trained Language Model}, author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu}, year={2020}, eprint={2011.01513}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```