| Column | Dtype | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-05 12:28:32 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (468 classes) | n/a | n/a |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-05 12:27:45 |
| card | string (length) | 11 | 1.01M |

For columns typed "(length)", Min and Max bound the value length; for timestamp and int64 columns they bound the values themselves.
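The records below are easiest to explore programmatically. A minimal sketch using the `datasets` library follows; the repository id `user/hub-models-metadata` is a placeholder assumption, not the actual source of this dump:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual dataset source.
ds = load_dataset("user/hub-models-metadata", split="train")

# The features mirror the schema table above.
print(ds.features)

# Peek at one record's key fields.
row = ds[0]
print(row["modelId"], row["downloads"], row["pipeline_tag"])
```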
philschmid/bert-mini-sst2-distilled
philschmid
2022-01-31T23:34:03Z
256
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-mini-sst2-distilled results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.856651376146789 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-mini-sst2-distilled This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1792 - Accuracy: 0.8567 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00021185586235152412 - train_batch_size: 1024 - eval_batch_size: 1024 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1552 | 1.0 | 66 | 1.4847 | 0.8349 | | 0.8451 | 2.0 | 132 | 1.3495 | 0.8624 | | 0.5864 | 3.0 | 198 | 1.2257 | 0.8532 | | 0.4553 | 4.0 | 264 | 1.2571 | 0.8544 | | 0.3708 | 5.0 | 330 | 1.2132 | 0.8658 | | 0.3086 | 6.0 | 396 | 1.2370 | 0.8589 | | 0.2701 | 7.0 | 462 | 1.1900 | 0.8635 | | 0.246 | 8.0 | 528 | 1.1792 | 0.8567 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
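Given the `transformers` library tag and `text-classification` pipeline tag on this record, the checkpoint should load with the standard pipeline API; a minimal sketch (the input sentence is illustrative):

```python
from transformers import pipeline

# Distilled BERT-mini SST-2 sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="philschmid/bert-mini-sst2-distilled")

# Returns a list of {label, score} dicts; label names come from the checkpoint config.
print(classifier("This movie was surprisingly good."))
```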
philschmid/tiny-bert-sst2-distilled
philschmid
2022-01-31T18:50:41Z
17,185
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: tiny-bert-sst2-distilled results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.8325688073394495 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-bert-sst2-distilled This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.7305 - Accuracy: 0.8326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0007199555649276667 - train_batch_size: 1024 - eval_batch_size: 1024 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.77 | 1.0 | 66 | 1.6939 | 0.8165 | | 0.729 | 2.0 | 132 | 1.5090 | 0.8326 | | 0.5242 | 3.0 | 198 | 1.5369 | 0.8257 | | 0.4017 | 4.0 | 264 | 1.7025 | 0.8326 | | 0.327 | 5.0 | 330 | 1.6743 | 0.8245 | | 0.2749 | 6.0 | 396 | 1.7305 | 0.8337 | | 0.2521 | 7.0 | 462 | 1.7305 | 0.8326 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
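The same pipeline usage applies to this smaller distilled checkpoint; to see the raw logits instead of the pipeline wrapper, a sketch with the Auto classes (assuming PyTorch is installed):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")
model = AutoModelForSequenceClassification.from_pretrained("philschmid/tiny-bert-sst2-distilled")

inputs = tokenizer("A tedious, joyless film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the argmax class index to its label via the model config.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```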
masapasa/xls-r-ab-test
masapasa
2022-01-31T17:22:19Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 140.0674 - Wer: 1.1193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
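The tags mark this as a `transformers` ASR checkpoint, so it should load through the speech-recognition pipeline; note that the reported WER above 1.0 suggests a smoke-test run rather than a usable model. A sketch, with `sample.wav` standing in for a real 16 kHz recording:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="masapasa/xls-r-ab-test")

# wav2vec2-style models expect 16 kHz mono audio.
print(asr("sample.wav"))
```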
anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream
anton-l
2022-01-31T17:19:19Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer model-index: - name: wav2vec2-xls-r-common_voice-tr-ft-stream results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-common_voice-tr-ft-stream This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.3519 - Wer: 0.2927 - Cer: 0.0694 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 0.6768 | 9.01 | 500 | 0.4220 | 0.5143 | 0.1235 | | 0.3801 | 19.01 | 1000 | 0.3303 | 0.4403 | 0.1055 | | 0.3616 | 29.0 | 1500 | 0.3540 | 0.3716 | 0.0878 | | 0.2334 | 39.0 | 2000 | 0.3666 | 0.3671 | 0.0842 | | 0.3141 | 49.0 | 2500 | 0.3407 | 0.3373 | 0.0819 | | 0.1926 | 58.01 | 3000 | 0.3886 | 0.3520 | 0.0867 | | 0.1372 | 68.01 | 3500 | 0.3415 | 0.3189 | 0.0743 | | 0.091 | 78.0 | 4000 | 0.3750 | 0.3164 | 0.0757 | | 0.0893 | 88.0 | 4500 | 0.3559 | 0.2968 | 0.0712 | | 0.095 | 98.0 | 5000 | 0.3519 | 0.2927 | 0.0694 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
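For wav2vec2 CTC checkpoints like this one, the decoding step can be made explicit instead of going through the pipeline. A sketch with one second of synthetic 16 kHz input; a real transcription needs actual Turkish speech:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# One second of silence stands in for real 16 kHz speech.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids))
```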
peter-explosion-ai/en_pipeline
peter-explosion-ai
2022-01-31T17:04:42Z
5
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_pipeline results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `textcat` | | **Components** | `textcat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `POSITIVE`, `NEGATIVE` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 55.70 | | `CATS_MICRO_P` | 58.65 | | `CATS_MICRO_R` | 58.65 | | `CATS_MICRO_F` | 58.65 | | `CATS_MACRO_P` | 61.88 | | `CATS_MACRO_R` | 58.69 | | `CATS_MACRO_F` | 55.70 | | `CATS_MACRO_AUC` | 63.53 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TEXTCAT_LOSS` | 3.74 |
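spaCy pipelines published on the Hub are installed as Python packages and then loaded by name; a minimal sketch of querying the two textcat labels listed above, assuming the `en_pipeline` package has already been pip-installed from the repository:

```python
import spacy

# Loads the installed pipeline package by its registered name.
nlp = spacy.load("en_pipeline")

doc = nlp("I absolutely loved this!")
# The textcat component writes category probabilities to doc.cats.
print(doc.cats)  # e.g. {'POSITIVE': ..., 'NEGATIVE': ...}
```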
gagan3012/xls-r-300m-pa
gagan3012
2022-01-31T15:27:47Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - pa-IN license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: xls-r-300m-pa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-300m-pa This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: - Loss: 1.0443 - Wer: 0.5715 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 500.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 4.6694 | 19.22 | 500 | 4.0455 | 1.0 | | 3.3907 | 38.45 | 1000 | 3.2836 | 1.0 | | 2.0866 | 57.67 | 1500 | 1.2788 | 0.7715 | | 1.4106 | 76.9 | 2000 | 0.7866 | 0.6891 | | 1.1711 | 96.15 | 2500 | 0.6556 | 0.6272 | | 1.038 | 115.37 | 3000 | 0.6195 | 0.5680 | | 0.8989 | 134.6 | 3500 | 0.6563 | 0.5602 | | 0.8021 | 153.82 | 4000 | 0.6644 | 0.5327 | | 0.7161 | 173.07 | 4500 | 0.6844 | 0.5253 | | 0.6449 | 192.3 | 5000 | 0.7018 | 0.5331 | | 0.5659 | 211.52 | 5500 | 0.7451 | 0.5465 | | 0.5118 | 230.75 | 6000 | 0.7857 | 0.5386 | | 0.4385 | 249.97 | 6500 | 0.8062 | 0.5382 | | 0.3984 | 269.22 | 7000 | 0.8316 | 0.5621 | | 0.3666 | 288.45 | 7500 | 0.8736 | 0.5504 | | 0.3256 | 307.67 | 8000 | 0.9133 | 0.5688 | | 0.289 | 326.9 | 8500 | 0.9556 | 0.5684 | | 0.2663 | 346.15 | 9000 | 0.9344 | 0.5708 | | 0.2445 | 365.37 | 9500 | 0.9472 | 0.5590 | | 0.2289 | 384.6 | 10000 | 0.9713 | 0.5672 | | 0.2048 | 403.82 | 10500 | 0.9978 | 0.5762 | | 0.1857 | 423.07 | 11000 | 1.0230 | 0.5798 | | 0.1751 | 442.3 | 11500 | 1.0409 | 0.5755 | | 0.1688 | 461.52 | 12000 | 1.0445 | 0.5727 | | 0.1633 | 480.75 | 12500 | 1.0484 | 0.5739 | | 0.1488 | 499.97 | 13000 | 1.0443 | 0.5715 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
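The Wer column in training tables like the one above can be reproduced with the `evaluate` library; a minimal sketch on a toy hypothesis/reference pair (the card's numbers come from the Common Voice pa-IN evaluation split):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy example: one deleted word out of a six-word reference.
predictions = ["the cat sat on mat"]
references = ["the cat sat on the mat"]

# WER = (substitutions + insertions + deletions) / reference word count.
print(wer_metric.compute(predictions=predictions, references=references))  # 1/6 ≈ 0.167
```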
anton-l/wav2vec2-tokenizer-turkish
anton-l
2022-01-31T11:37:43Z
0
0
null
[ "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: cc0-1.0 --- This is a standalone Turkish Wav2Vec2 tokenizer config intended for use with `run_speech_recognition_ctc_streaming.py`
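Since the repository is a standalone CTC tokenizer config, it should load directly with the matching tokenizer class; a minimal sketch (the sample string is illustrative):

```python
from transformers import Wav2Vec2CTCTokenizer

tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("anton-l/wav2vec2-tokenizer-turkish")

# Round-trip a Turkish string through the CTC character vocabulary.
ids = tokenizer("merhaba dünya").input_ids
print(tokenizer.decode(ids))
```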
huggingtweets/tks
huggingtweets
2022-01-31T10:20:15Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/tks/1643624411056/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1044664291050344449/vKKJxtBF_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">高須正和@NT深圳コミュニティ/TAKASU@NT Shenzhen</div> <div style="text-align: center; font-size: 14px;">@tks</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 高須正和@NT深圳コミュニティ/TAKASU@NT Shenzhen. | Data | 高須正和@NT深圳コミュニティ/TAKASU@NT Shenzhen | | --- | --- | | Tweets downloaded | 3248 | | Retweets | 1831 | | Short tweets | 825 | | Tweets kept | 592 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lg0mgsp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tks's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/j1ak5d5p) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/j1ak5d5p/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tks') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/goando-tsuchinao83-za09313103
huggingtweets
2022-01-31T09:56:33Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/goando-tsuchinao83-za09313103/1643622988627/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/715665333218979842/fLLzpFee_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1145832571214815232/KYNcOP04_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1281544202627674112/zglo72WL_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch</div> <div style="text-align: center; font-size: 14px;">@goando-tsuchinao83-za09313103</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch. | Data | 土屋尚史 / Goodpatch | Go Ando / PREDUCTS / THE GUILD | shun nozaki / Goodpatch | | --- | --- | --- | --- | | Tweets downloaded | 3236 | 3250 | 798 | | Retweets | 1577 | 97 | 34 | | Short tweets | 914 | 1729 | 458 | | Tweets kept | 745 | 1424 | 306 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31bsh75f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando-tsuchinao83-za09313103's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/goando-tsuchinao83-za09313103') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/eri_razapii-marisakura-miyakomx
huggingtweets
2022-01-31T07:36:10Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/eri_razapii-marisakura-miyakomx/1643614565483/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1463699400405164034/aRY9jlnO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1460131579930755073/ln4j-nWU_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1466279279667277828/VqmxK5gB_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">えりらざぴ | SHE CEO/CCO & 櫻本真理 cotree/CoachEd & 吉澤美弥子🤿Coral Capital</div> <div style="text-align: center; font-size: 14px;">@eri_razapii-marisakura-miyakomx</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from えりらざぴ | SHE CEO/CCO & 櫻本真理 cotree/CoachEd & 吉澤美弥子🤿Coral Capital. | Data | えりらざぴ \| SHE CEO/CCO | 櫻本真理 cotree/CoachEd | 吉澤美弥子🤿Coral Capital | | --- | --- | --- | --- | | Tweets downloaded | 3232 | 3205 | 1206 | | Retweets | 1781 | 1564 | 79 | | Short tweets | 959 | 877 | 736 | | Tweets kept | 492 | 764 | 391 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xlu40i1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eri_razapii-marisakura-miyakomx's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22cwqnkv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22cwqnkv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eri_razapii-marisakura-miyakomx') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
TajMahaladeen/pokemon_gptj
TajMahaladeen
2022-01-31T06:12:31Z
9
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
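The tags identify a GPT-J `text-generation` checkpoint, so the standard generation pipeline should apply; a sketch (the prompt is illustrative, and GPT-J-sized weights need substantial RAM or a GPU):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="TajMahaladeen/pokemon_gptj")

# Sample up to 40 new tokens from an illustrative prompt.
print(generator("A wild Pikachu appeared", max_new_tokens=40))
```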
NbAiLab/xls-r-1b-npsc
NbAiLab
2022-01-31T04:33:39Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 ---
gabrieljg/wav2vec2-common_voice-es-demo
gabrieljg
2022-01-30T21:38:32Z
29
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "es", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-es-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-es-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - ES dataset. It achieves the following results on the evaluation set: - Loss: 0.1788 - Wer: 1.0239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | No log | 0.02 | 100 | 6.6465 | 1.0 | | No log | 0.04 | 200 | 3.0150 | 1.0 | | No log | 0.05 | 300 | 2.8622 | 1.0003 | | No log | 0.07 | 400 | 0.9506 | 0.9771 | | 5.1598 | 0.09 | 500 | 0.4883 | 1.0009 | | 5.1598 | 0.11 | 600 | 0.3893 | 1.0203 | | 5.1598 | 0.13 | 700 | 0.3417 | 1.0283 | | 5.1598 | 0.14 | 800 | 0.3352 | 1.0335 | | 5.1598 | 0.16 | 900 | 0.2987 | 1.0168 | | 0.3671 | 0.18 | 1000 | 0.2921 | 1.0159 | | 0.3671 | 0.2 | 1100 | 0.2770 | 1.0096 | | 0.3671 | 0.22 | 1200 | 0.2790 | 1.0398 | | 0.3671 | 0.24 | 1300 | 0.2659 | 1.0190 | | 0.3671 | 0.25 | 1400 | 0.2657 | 1.0528 | | 0.289 | 0.27 | 1500 | 0.2556 | 1.0301 | | 0.289 | 0.29 | 1600 | 0.2514 | 1.0193 | | 0.289 | 0.31 | 1700 | 0.2708 | 1.0699 | | 0.289 | 0.33 | 1800 | 0.2455 | 1.0723 | | 0.289 | 0.34 | 1900 | 0.2456 | 1.0100 | | 0.271 | 0.36 | 2000 | 0.2338 | 1.0533 | | 0.271 | 0.38 | 2100 | 0.2479 | 1.0128 | | 0.271 | 0.4 | 2200 | 0.2483 | 1.0386 | | 0.271 | 0.42 | 2300 | 0.2436 | 1.0528 | | 0.271 | 0.43 | 2400 | 0.2382 | 1.0476 | | 0.2634 | 0.45 | 2500 | 0.2329 | 1.0680 | | 0.2634 | 0.47 | 2600 | 0.2433 | 1.0581 | | 0.2634 | 0.49 | 2700 | 0.2354 | 1.0641 | | 0.2634 | 0.51 | 2800 | 0.2318 | 1.0504 | | 0.2634 | 0.52 | 2900 | 0.2325 | 1.0500 | | 0.2522 | 0.54 | 3000 | 0.2344 | 1.0380 | | 0.2522 | 0.56 | 3100 | 0.2244 | 1.0663 | | 0.2522 | 0.58 | 3200 | 0.2340 | 1.0647 | | 0.2522 | 0.6 | 3300 | 0.2288 | 1.0538 | | 0.2522 | 0.61 | 3400 | 0.2212 | 1.0614 | | 0.2468 | 0.63 | 3500 | 0.2487 | 1.0557 | | 0.2468 | 0.65 | 3600 | 0.2330 | 1.0510 | | 0.2468 | 0.67 | 3700 | 0.2308 | 1.0506 | | 0.2468 | 0.69 | 3800 | 0.2320 | 1.0451 | | 0.2468 | 0.71 | 3900 | 0.2261 | 1.0701 | | 0.2505 | 0.72 | 4000 | 0.2281 | 1.0713 | | 0.2505 | 0.74 | 4100 | 0.2277 | 1.0741 | | 0.2505 | 0.76 | 4200 | 0.2253 | 1.0814 | | 0.2505 | 0.78 | 4300 | 0.2215 | 1.0437 | | 0.2505 | 0.8 | 4400 | 0.2220 | 1.0557 | | 0.2434 | 0.81 | 4500 | 0.2184 | 1.0533 | | 0.2434 | 0.83 | 4600 | 0.2222 | 1.0819 | | 0.2434 | 0.85 | 4700 | 0.2162 | 1.0238 | | 0.2434 | 0.87 | 4800 | 0.2132 | 1.0457 | | 0.2434 | 0.89 | 4900 | 0.2068 | 1.0611 | | 0.2347 | 0.9 | 5000 | 0.2166 | 1.0332 | | 0.2347 | 0.92 | 5100 | 0.2087 | 
1.0433 | | 0.2347 | 0.94 | 5200 | 0.2100 | 1.0292 | | 0.2347 | 0.96 | 5300 | 0.2067 | 1.0734 | | 0.2347 | 0.98 | 5400 | 0.2148 | 1.0279 | | 0.2333 | 0.99 | 5500 | 0.2125 | 1.0277 | | 0.2333 | 1.01 | 5600 | 0.2054 | 1.0453 | | 0.2333 | 1.03 | 5700 | 0.2091 | 1.0557 | | 0.2333 | 1.05 | 5800 | 0.2086 | 1.0239 | | 0.2333 | 1.07 | 5900 | 0.2051 | 1.0645 | | 0.2087 | 1.09 | 6000 | 0.2103 | 1.0240 | | 0.2087 | 1.1 | 6100 | 0.2145 | 1.0197 | | 0.2087 | 1.12 | 6200 | 0.2136 | 1.0248 | | 0.2087 | 1.14 | 6300 | 0.2045 | 1.0443 | | 0.2087 | 1.16 | 6400 | 0.2089 | 1.0397 | | 0.2013 | 1.18 | 6500 | 0.2012 | 1.0654 | | 0.2013 | 1.19 | 6600 | 0.2054 | 1.0414 | | 0.2013 | 1.21 | 6700 | 0.2081 | 1.0632 | | 0.2013 | 1.23 | 6800 | 0.2104 | 1.0190 | | 0.2013 | 1.25 | 6900 | 0.2045 | 1.0813 | | 0.2092 | 1.27 | 7000 | 0.2096 | 1.0751 | | 0.2092 | 1.28 | 7100 | 0.2103 | 1.0328 | | 0.2092 | 1.3 | 7200 | 0.2044 | 1.0011 | | 0.2092 | 1.32 | 7300 | 0.2089 | 1.0260 | | 0.2092 | 1.34 | 7400 | 0.2063 | 1.0551 | | 0.2076 | 1.36 | 7500 | 0.2029 | 1.0075 | | 0.2076 | 1.37 | 7600 | 0.2040 | 1.0528 | | 0.2076 | 1.39 | 7700 | 0.2075 | 1.0398 | | 0.2076 | 1.41 | 7800 | 0.2023 | 1.0231 | | 0.2076 | 1.43 | 7900 | 0.2049 | 1.0318 | | 0.2028 | 1.45 | 8000 | 0.2072 | 1.0763 | | 0.2028 | 1.47 | 8100 | 0.2075 | 1.0762 | | 0.2028 | 1.48 | 8200 | 0.2052 | 1.0838 | | 0.2028 | 1.5 | 8300 | 0.2053 | 1.0407 | | 0.2028 | 1.52 | 8400 | 0.2066 | 1.0266 | | 0.2025 | 1.54 | 8500 | 0.2037 | 1.0628 | | 0.2025 | 1.56 | 8600 | 0.2010 | 1.0351 | | 0.2025 | 1.57 | 8700 | 0.1961 | 1.0812 | | 0.2025 | 1.59 | 8800 | 0.1963 | 1.0868 | | 0.2025 | 1.61 | 8900 | 0.2022 | 1.0710 | | 0.1997 | 1.63 | 9000 | 0.2051 | 1.0764 | | 0.1997 | 1.65 | 9100 | 0.1987 | 1.0581 | | 0.1997 | 1.66 | 9200 | 0.2051 | 1.0611 | | 0.1997 | 1.68 | 9300 | 0.1999 | 1.0808 | | 0.1997 | 1.7 | 9400 | 0.1972 | 1.0703 | | 0.1983 | 1.72 | 9500 | 0.1961 | 1.0584 | | 0.1983 | 1.74 | 9600 | 0.2031 | 1.0938 | | 0.1983 | 1.75 | 9700 | 0.2019 | 1.0891 | | 0.1983 | 1.77 | 9800 | 0.2006 | 1.0542 | | 0.1983 | 1.79 | 9900 | 0.1925 | 1.0627 | | 0.1961 | 1.81 | 10000 | 0.1976 | 1.0751 | | 0.1961 | 1.83 | 10100 | 0.2051 | 1.0611 | | 0.1961 | 1.85 | 10200 | 0.2037 | 1.0656 | | 0.1961 | 1.86 | 10300 | 0.2025 | 1.0291 | | 0.1961 | 1.88 | 10400 | 0.1977 | 1.0525 | | 0.2025 | 1.9 | 10500 | 0.2030 | 1.0670 | | 0.2025 | 1.92 | 10600 | 0.1980 | 1.0765 | | 0.2025 | 1.94 | 10700 | 0.1975 | 1.0254 | | 0.2025 | 1.95 | 10800 | 0.1986 | 1.0636 | | 0.2025 | 1.97 | 10900 | 0.1956 | 1.0352 | | 0.2025 | 1.99 | 11000 | 0.1954 | 1.0265 | | 0.2025 | 2.01 | 11100 | 0.1957 | 1.0752 | | 0.2025 | 2.03 | 11200 | 0.1943 | 1.0784 | | 0.2025 | 2.04 | 11300 | 0.1898 | 1.0341 | | 0.2025 | 2.06 | 11400 | 0.1921 | 1.0301 | | 0.1805 | 2.08 | 11500 | 0.1910 | 1.0230 | | 0.1805 | 2.1 | 11600 | 0.1961 | 1.0203 | | 0.1805 | 2.12 | 11700 | 0.1973 | 1.0776 | | 0.1805 | 2.13 | 11800 | 0.1876 | 1.0788 | | 0.1805 | 2.15 | 11900 | 0.1934 | 1.0251 | | 0.177 | 2.17 | 12000 | 0.1967 | 1.0340 | | 0.177 | 2.19 | 12100 | 0.1932 | 1.0131 | | 0.177 | 2.21 | 12200 | 0.1926 | 1.0078 | | 0.177 | 2.23 | 12300 | 0.1947 | 0.9991 | | 0.177 | 2.24 | 12400 | 0.1914 | 1.0213 | | 0.1782 | 2.26 | 12500 | 0.1962 | 0.9882 | | 0.1782 | 2.28 | 12600 | 0.1960 | 1.0562 | | 0.1782 | 2.3 | 12700 | 0.2006 | 1.0401 | | 0.1782 | 2.32 | 12800 | 0.1950 | 1.0688 | | 0.1782 | 2.33 | 12900 | 0.1920 | 1.0435 | | 0.1796 | 2.35 | 13000 | 0.1926 | 1.0667 | | 0.1796 | 2.37 | 13100 | 0.1949 | 1.0859 | | 0.1796 | 2.39 | 13200 | 0.1932 | 1.0670 | | 0.1796 | 2.41 | 13300 | 0.1882 | 1.0663 
| | 0.1796 | 2.42 | 13400 | 0.1877 | 1.0760 | | 0.1775 | 2.44 | 13500 | 0.1893 | 1.0859 | | 0.1775 | 2.46 | 13600 | 0.1936 | 1.0702 | | 0.1775 | 2.48 | 13700 | 0.1871 | 1.0414 | | 0.1775 | 2.5 | 13800 | 0.1917 | 1.0430 | | 0.1775 | 2.51 | 13900 | 0.1922 | 1.0422 | | 0.1778 | 2.53 | 14000 | 0.1875 | 1.0585 | | 0.1778 | 2.55 | 14100 | 0.1876 | 1.0603 | | 0.1778 | 2.57 | 14200 | 0.1888 | 1.0628 | | 0.1778 | 2.59 | 14300 | 0.1948 | 1.0782 | | 0.1778 | 2.6 | 14400 | 0.1942 | 1.0695 | | 0.1784 | 2.62 | 14500 | 0.1842 | 1.0863 | | 0.1784 | 2.64 | 14600 | 0.1850 | 1.0543 | | 0.1784 | 2.66 | 14700 | 0.1824 | 1.0683 | | 0.1784 | 2.68 | 14800 | 0.1888 | 1.0693 | | 0.1784 | 2.7 | 14900 | 0.1871 | 1.0175 | | 0.1753 | 2.71 | 15000 | 0.1889 | 1.0549 | | 0.1753 | 2.73 | 15100 | 0.1865 | 1.0544 | | 0.1753 | 2.75 | 15200 | 0.1918 | 1.0726 | | 0.1753 | 2.77 | 15300 | 0.1964 | 1.0915 | | 0.1753 | 2.79 | 15400 | 0.1900 | 1.0610 | | 0.1768 | 2.8 | 15500 | 0.1894 | 1.0763 | | 0.1768 | 2.82 | 15600 | 0.1882 | 1.0548 | | 0.1768 | 2.84 | 15700 | 0.1861 | 1.0902 | | 0.1768 | 2.86 | 15800 | 0.1860 | 1.0551 | | 0.1768 | 2.88 | 15900 | 0.1879 | 1.0581 | | 0.1761 | 2.89 | 16000 | 0.1899 | 1.0544 | | 0.1761 | 2.91 | 16100 | 0.1860 | 1.0530 | | 0.1761 | 2.93 | 16200 | 0.1894 | 1.0596 | | 0.1761 | 2.95 | 16300 | 0.1835 | 1.0394 | | 0.1761 | 2.97 | 16400 | 0.1852 | 1.0445 | | 0.1754 | 2.98 | 16500 | 0.1847 | 1.0390 | | 0.1754 | 3.0 | 16600 | 0.1828 | 1.0440 | | 0.1754 | 3.02 | 16700 | 0.1869 | 1.0560 | | 0.1754 | 3.04 | 16800 | 0.1882 | 1.0573 | | 0.1754 | 3.06 | 16900 | 0.1912 | 1.0600 | | 0.1592 | 3.08 | 17000 | 0.1921 | 1.0529 | | 0.1592 | 3.09 | 17100 | 0.1881 | 1.0175 | | 0.1592 | 3.11 | 17200 | 0.1891 | 1.0654 | | 0.1592 | 3.13 | 17300 | 0.1889 | 1.0687 | | 0.1592 | 3.15 | 17400 | 0.1916 | 1.0642 | | 0.1556 | 3.17 | 17500 | 0.1850 | 1.0295 | | 0.1556 | 3.18 | 17600 | 0.1875 | 1.0273 | | 0.1556 | 3.2 | 17700 | 0.1894 | 1.0051 | | 0.1556 | 3.22 | 17800 | 0.1870 | 1.0462 | | 0.1556 | 3.24 | 17900 | 0.1831 | 1.0308 | | 0.1557 | 3.26 | 18000 | 0.1878 | 1.0603 | | 0.1557 | 3.27 | 18100 | 0.1850 | 1.0566 | | 0.1557 | 3.29 | 18200 | 0.1843 | 1.0629 | | 0.1557 | 3.31 | 18300 | 0.1886 | 1.0378 | | 0.1557 | 3.33 | 18400 | 0.1892 | 1.0381 | | 0.159 | 3.35 | 18500 | 0.1942 | 1.0519 | | 0.159 | 3.36 | 18600 | 0.1829 | 1.0622 | | 0.159 | 3.38 | 18700 | 0.1894 | 1.0557 | | 0.159 | 3.4 | 18800 | 0.1895 | 1.0627 | | 0.159 | 3.42 | 18900 | 0.1863 | 1.0362 | | 0.1582 | 3.44 | 19000 | 0.1888 | 1.0491 | | 0.1582 | 3.46 | 19100 | 0.1854 | 1.0483 | | 0.1582 | 3.47 | 19200 | 0.1797 | 0.9787 | | 0.1582 | 3.49 | 19300 | 0.1785 | 1.0086 | | 0.1582 | 3.51 | 19400 | 0.1797 | 0.9915 | | 0.1507 | 3.53 | 19500 | 0.1873 | 1.0266 | | 0.1507 | 3.55 | 19600 | 0.1838 | 1.0299 | | 0.1507 | 3.56 | 19700 | 0.1817 | 1.0355 | | 0.1507 | 3.58 | 19800 | 0.1819 | 1.0271 | | 0.1507 | 3.6 | 19900 | 0.1883 | 1.0248 | | 0.1601 | 3.62 | 20000 | 0.1823 | 1.0406 | | 0.1601 | 3.64 | 20100 | 0.1801 | 1.0261 | | 0.1601 | 3.65 | 20200 | 0.1783 | 1.0329 | | 0.1601 | 3.67 | 20300 | 0.1857 | 1.0162 | | 0.1601 | 3.69 | 20400 | 0.1814 | 1.0212 | | 0.1552 | 3.71 | 20500 | 0.1837 | 1.0232 | | 0.1552 | 3.73 | 20600 | 0.1843 | 1.0314 | | 0.1552 | 3.74 | 20700 | 0.1842 | 1.0258 | | 0.1552 | 3.76 | 20800 | 0.1821 | 1.0479 | | 0.1552 | 3.78 | 20900 | 0.1864 | 1.0459 | | 0.1576 | 3.8 | 21000 | 0.1831 | 1.0364 | | 0.1576 | 3.82 | 21100 | 0.1852 | 1.0271 | | 0.1576 | 3.83 | 21200 | 0.1865 | 1.0204 | | 0.1576 | 3.85 | 21300 | 0.1794 | 1.0324 | | 0.1576 | 3.87 | 21400 | 0.1826 | 1.0315 | | 
0.1585 | 3.89 | 21500 | 0.1824 | 1.0327 | | 0.1585 | 3.91 | 21600 | 0.1838 | 1.0208 | | 0.1585 | 3.93 | 21700 | 0.1850 | 1.0199 | | 0.1585 | 3.94 | 21800 | 0.1841 | 1.0050 | | 0.1585 | 3.96 | 21900 | 0.1783 | 1.0003 | | 0.1572 | 3.98 | 22000 | 0.1787 | 1.0115 | | 0.1572 | 4.0 | 22100 | 0.1810 | 1.0235 | | 0.1572 | 4.02 | 22200 | 0.1763 | 1.0191 | | 0.1572 | 4.03 | 22300 | 0.1764 | 1.0332 | | 0.1572 | 4.05 | 22400 | 0.1794 | 1.0429 | | 0.1406 | 4.07 | 22500 | 0.1905 | 1.0288 | | 0.1406 | 4.09 | 22600 | 0.1776 | 1.0244 | | 0.1406 | 4.11 | 22700 | 0.1782 | 1.0451 | | 0.1406 | 4.12 | 22800 | 0.1771 | 1.0387 | | 0.1406 | 4.14 | 22900 | 0.1788 | 1.0435 | | 0.14 | 4.16 | 23000 | 0.1792 | 1.0421 | | 0.14 | 4.18 | 23100 | 0.1841 | 1.0241 | | 0.14 | 4.2 | 23200 | 0.1769 | 1.0546 | | 0.14 | 4.21 | 23300 | 0.1815 | 1.0602 | | 0.14 | 4.23 | 23400 | 0.1784 | 1.0369 | | 0.1394 | 4.25 | 23500 | 0.1809 | 1.0406 | | 0.1394 | 4.27 | 23600 | 0.1744 | 1.0133 | | 0.1394 | 4.29 | 23700 | 0.1771 | 1.0214 | | 0.1394 | 4.31 | 23800 | 0.1765 | 1.0064 | | 0.1394 | 4.32 | 23900 | 0.1793 | 1.0200 | | 0.14 | 4.34 | 24000 | 0.1776 | 1.0352 | | 0.14 | 4.36 | 24100 | 0.1775 | 1.0294 | | 0.14 | 4.38 | 24200 | 0.1763 | 1.0213 | | 0.14 | 4.4 | 24300 | 0.1697 | 1.0302 | | 0.14 | 4.41 | 24400 | 0.1771 | 1.0259 | | 0.1408 | 4.43 | 24500 | 0.1747 | 1.0409 | | 0.1408 | 4.45 | 24600 | 0.1769 | 1.0278 | | 0.1408 | 4.47 | 24700 | 0.1767 | 1.0190 | | 0.1408 | 4.49 | 24800 | 0.1745 | 1.0281 | | 0.1408 | 4.5 | 24900 | 0.1738 | 1.0356 | | 0.1391 | 4.52 | 25000 | 0.1781 | 1.0429 | | 0.1391 | 4.54 | 25100 | 0.1784 | 1.0076 | | 0.1391 | 4.56 | 25200 | 0.1771 | 1.0157 | | 0.1391 | 4.58 | 25300 | 0.1758 | 1.0337 | | 0.1391 | 4.59 | 25400 | 0.1758 | 1.0466 | | 0.1398 | 4.61 | 25500 | 0.1724 | 1.0403 | | 0.1398 | 4.63 | 25600 | 0.1765 | 1.0481 | | 0.1398 | 4.65 | 25700 | 0.1757 | 1.0320 | | 0.1398 | 4.67 | 25800 | 0.1814 | 1.0479 | | 0.1398 | 4.69 | 25900 | 0.1713 | 1.0251 | | 0.1427 | 4.7 | 26000 | 0.1735 | 1.0340 | | 0.1427 | 4.72 | 26100 | 0.1765 | 1.0358 | | 0.1427 | 4.74 | 26200 | 0.1731 | 1.0220 | | 0.1427 | 4.76 | 26300 | 0.1769 | 1.0261 | | 0.1427 | 4.78 | 26400 | 0.1747 | 1.0139 | | 0.1424 | 4.79 | 26500 | 0.1791 | 1.0406 | | 0.1424 | 4.81 | 26600 | 0.1735 | 1.0497 | | 0.1424 | 4.83 | 26700 | 0.1710 | 1.0433 | | 0.1424 | 4.85 | 26800 | 0.1771 | 1.0002 | | 0.1424 | 4.87 | 26900 | 0.1748 | 1.0046 | | 0.1419 | 4.88 | 27000 | 0.1794 | 1.0332 | | 0.1419 | 4.9 | 27100 | 0.1772 | 1.0558 | | 0.1419 | 4.92 | 27200 | 0.1757 | 1.0477 | | 0.1419 | 4.94 | 27300 | 0.1735 | 1.0324 | | 0.1419 | 4.96 | 27400 | 0.1758 | 1.0260 | | 0.1433 | 4.97 | 27500 | 0.1767 | 1.0422 | | 0.1433 | 4.99 | 27600 | 0.1695 | 1.0386 | | 0.1433 | 5.01 | 27700 | 0.1763 | 1.0571 | | 0.1433 | 5.03 | 27800 | 0.1743 | 1.0367 | | 0.1433 | 5.05 | 27900 | 0.1804 | 1.0255 | | 0.1306 | 5.07 | 28000 | 0.1803 | 1.0377 | | 0.1306 | 5.08 | 28100 | 0.1750 | 1.0552 | | 0.1306 | 5.1 | 28200 | 0.1743 | 1.0512 | | 0.1306 | 5.12 | 28300 | 0.1777 | 1.0584 | | 0.1306 | 5.14 | 28400 | 0.1726 | 1.0374 | | 0.123 | 5.16 | 28500 | 0.1776 | 1.0439 | | 0.123 | 5.17 | 28600 | 0.1759 | 1.0682 | | 0.123 | 5.19 | 28700 | 0.1724 | 1.0511 | | 0.123 | 5.21 | 28800 | 0.1677 | 1.0560 | | 0.123 | 5.23 | 28900 | 0.1699 | 1.0421 | | 0.1217 | 5.25 | 29000 | 0.1803 | 1.0370 | | 0.1217 | 5.26 | 29100 | 0.1770 | 1.0474 | | 0.1217 | 5.28 | 29200 | 0.1733 | 1.0332 | | 0.1217 | 5.3 | 29300 | 0.1746 | 1.0158 | | 0.1217 | 5.32 | 29400 | 0.1763 | 1.0341 | | 0.1246 | 5.34 | 29500 | 0.1775 | 1.0348 | | 0.1246 | 5.35 | 29600 | 
0.1730 | 1.0492 | | 0.1246 | 5.37 | 29700 | 0.1730 | 1.0503 | | 0.1246 | 5.39 | 29800 | 0.1727 | 1.0437 | | 0.1246 | 5.41 | 29900 | 0.1744 | 1.0539 | | 0.127 | 5.43 | 30000 | 0.1748 | 1.0463 | | 0.127 | 5.44 | 30100 | 0.1746 | 1.0555 | | 0.127 | 5.46 | 30200 | 0.1810 | 1.0558 | | 0.127 | 5.48 | 30300 | 0.1773 | 1.0407 | | 0.127 | 5.5 | 30400 | 0.1722 | 1.0489 | | 0.1276 | 5.52 | 30500 | 0.1720 | 1.0520 | | 0.1276 | 5.54 | 30600 | 0.1777 | 1.0347 | | 0.1276 | 5.55 | 30700 | 0.1685 | 1.0347 | | 0.1276 | 5.57 | 30800 | 0.1659 | 1.0338 | | 0.1276 | 5.59 | 30900 | 0.1756 | 1.0228 | | 0.1246 | 5.61 | 31000 | 0.1717 | 1.0409 | | 0.1246 | 5.63 | 31100 | 0.1764 | 1.0202 | | 0.1246 | 5.64 | 31200 | 0.1693 | 1.0314 | | 0.1246 | 5.66 | 31300 | 0.1731 | 1.0319 | | 0.1246 | 5.68 | 31400 | 0.1688 | 1.0380 | | 0.1271 | 5.7 | 31500 | 0.1671 | 1.0350 | | 0.1271 | 5.72 | 31600 | 0.1676 | 1.0430 | | 0.1271 | 5.73 | 31700 | 0.1656 | 1.0441 | | 0.1271 | 5.75 | 31800 | 0.1664 | 1.0403 | | 0.1271 | 5.77 | 31900 | 0.1691 | 1.0152 | | 0.1259 | 5.79 | 32000 | 0.1702 | 1.0018 | | 0.1259 | 5.81 | 32100 | 0.1664 | 1.0246 | | 0.1259 | 5.82 | 32200 | 0.1737 | 1.0340 | | 0.1259 | 5.84 | 32300 | 0.1742 | 1.0449 | | 0.1259 | 5.86 | 32400 | 0.1707 | 1.0279 | | 0.1273 | 5.88 | 32500 | 0.1697 | 1.0471 | | 0.1273 | 5.9 | 32600 | 0.1668 | 1.0322 | | 0.1273 | 5.92 | 32700 | 0.1706 | 1.0378 | | 0.1273 | 5.93 | 32800 | 0.1704 | 1.0350 | | 0.1273 | 5.95 | 32900 | 0.1725 | 1.0244 | | 0.123 | 5.97 | 33000 | 0.1678 | 1.0447 | | 0.123 | 5.99 | 33100 | 0.1681 | 1.0438 | | 0.123 | 6.01 | 33200 | 0.1689 | 1.0297 | | 0.123 | 6.02 | 33300 | 0.1690 | 1.0333 | | 0.123 | 6.04 | 33400 | 0.1734 | 1.0296 | | 0.1163 | 6.06 | 33500 | 0.1748 | 1.0307 | | 0.1163 | 6.08 | 33600 | 0.1715 | 1.0123 | | 0.1163 | 6.1 | 33700 | 0.1668 | 1.0117 | | 0.1163 | 6.11 | 33800 | 0.1690 | 1.0230 | | 0.1163 | 6.13 | 33900 | 0.1693 | 1.0166 | | 0.1101 | 6.15 | 34000 | 0.1728 | 1.0162 | | 0.1101 | 6.17 | 34100 | 0.1683 | 1.0107 | | 0.1101 | 6.19 | 34200 | 0.1703 | 0.9814 | | 0.1101 | 6.2 | 34300 | 0.1692 | 1.0007 | | 0.1101 | 6.22 | 34400 | 0.1690 | 1.0000 | | 0.1118 | 6.24 | 34500 | 0.1734 | 0.9972 | | 0.1118 | 6.26 | 34600 | 0.1739 | 1.0096 | | 0.1118 | 6.28 | 34700 | 0.1749 | 1.0047 | | 0.1118 | 6.3 | 34800 | 0.1709 | 1.0111 | | 0.1118 | 6.31 | 34900 | 0.1717 | 1.0179 | | 0.1153 | 6.33 | 35000 | 0.1690 | 1.0155 | | 0.1153 | 6.35 | 35100 | 0.1710 | 1.0144 | | 0.1153 | 6.37 | 35200 | 0.1719 | 1.0030 | | 0.1153 | 6.39 | 35300 | 0.1690 | 1.0272 | | 0.1153 | 6.4 | 35400 | 0.1673 | 1.0103 | | 0.1106 | 6.42 | 35500 | 0.1710 | 1.0222 | | 0.1106 | 6.44 | 35600 | 0.1747 | 1.0173 | | 0.1106 | 6.46 | 35700 | 0.1721 | 0.9933 | | 0.1106 | 6.48 | 35800 | 0.1670 | 1.0184 | | 0.1106 | 6.49 | 35900 | 0.1714 | 1.0122 | | 0.1116 | 6.51 | 36000 | 0.1717 | 1.0035 | | 0.1116 | 6.53 | 36100 | 0.1685 | 1.0099 | | 0.1116 | 6.55 | 36200 | 0.1687 | 1.0288 | | 0.1116 | 6.57 | 36300 | 0.1664 | 1.0314 | | 0.1116 | 6.58 | 36400 | 0.1665 | 1.0264 | | 0.1128 | 6.6 | 36500 | 0.1681 | 1.0420 | | 0.1128 | 6.62 | 36600 | 0.1682 | 1.0409 | | 0.1128 | 6.64 | 36700 | 0.1717 | 1.0271 | | 0.1128 | 6.66 | 36800 | 0.1717 | 1.0166 | | 0.1128 | 6.68 | 36900 | 0.1755 | 1.0175 | | 0.1134 | 6.69 | 37000 | 0.1623 | 1.0185 | | 0.1134 | 6.71 | 37100 | 0.1674 | 1.0302 | | 0.1134 | 6.73 | 37200 | 0.1633 | 1.0325 | | 0.1134 | 6.75 | 37300 | 0.1628 | 1.0228 | | 0.1134 | 6.77 | 37400 | 0.1636 | 1.0243 | | 0.1102 | 6.78 | 37500 | 0.1667 | 1.0282 | | 0.1102 | 6.8 | 37600 | 0.1623 | 1.0212 | | 0.1102 | 6.82 | 37700 | 0.1639 | 
1.0140 | | 0.1102 | 6.84 | 37800 | 0.1587 | 1.0258 | | 0.1102 | 6.86 | 37900 | 0.1610 | 1.0087 | | 0.1113 | 6.87 | 38000 | 0.1647 | 1.0199 | | 0.1113 | 6.89 | 38100 | 0.1609 | 1.0054 | | 0.1113 | 6.91 | 38200 | 0.1602 | 1.0145 | | 0.1113 | 6.93 | 38300 | 0.1602 | 1.0144 | | 0.1113 | 6.95 | 38400 | 0.1602 | 1.0375 | | 0.1071 | 6.96 | 38500 | 0.1592 | 1.0259 | | 0.1071 | 6.98 | 38600 | 0.1612 | 1.0236 | | 0.1071 | 7.0 | 38700 | 0.1621 | 1.0277 | | 0.1071 | 7.02 | 38800 | 0.1669 | 1.0367 | | 0.1071 | 7.04 | 38900 | 0.1742 | 1.0484 | | 0.1062 | 7.05 | 39000 | 0.1752 | 1.0302 | | 0.1062 | 7.07 | 39100 | 0.1676 | 1.0244 | | 0.1062 | 7.09 | 39200 | 0.1723 | 1.0300 | | 0.1062 | 7.11 | 39300 | 0.1727 | 1.0294 | | 0.1062 | 7.13 | 39400 | 0.1711 | 1.0255 | | 0.1021 | 7.15 | 39500 | 0.1699 | 1.0471 | | 0.1021 | 7.16 | 39600 | 0.1682 | 1.0426 | | 0.1021 | 7.18 | 39700 | 0.1713 | 1.0233 | | 0.1021 | 7.2 | 39800 | 0.1682 | 1.0259 | | 0.1021 | 7.22 | 39900 | 0.1710 | 1.0162 | | 0.103 | 7.24 | 40000 | 0.1725 | 1.0283 | | 0.103 | 7.25 | 40100 | 0.1729 | 1.0264 | | 0.103 | 7.27 | 40200 | 0.1665 | 1.0451 | | 0.103 | 7.29 | 40300 | 0.1671 | 1.0386 | | 0.103 | 7.31 | 40400 | 0.1671 | 1.0316 | | 0.0981 | 7.33 | 40500 | 0.1708 | 1.0257 | | 0.0981 | 7.34 | 40600 | 0.1642 | 1.0152 | | 0.0981 | 7.36 | 40700 | 0.1707 | 1.0110 | | 0.0981 | 7.38 | 40800 | 0.1675 | 1.0186 | | 0.0981 | 7.4 | 40900 | 0.1702 | 1.0123 | | 0.1005 | 7.42 | 41000 | 0.1699 | 1.0159 | | 0.1005 | 7.43 | 41100 | 0.1703 | 1.0219 | | 0.1005 | 7.45 | 41200 | 0.1707 | 1.0194 | | 0.1005 | 7.47 | 41300 | 0.1644 | 1.0016 | | 0.1005 | 7.49 | 41400 | 0.1716 | 0.9941 | | 0.1021 | 7.51 | 41500 | 0.1670 | 1.0159 | | 0.1021 | 7.53 | 41600 | 0.1667 | 1.0033 | | 0.1021 | 7.54 | 41700 | 0.1667 | 1.0176 | | 0.1021 | 7.56 | 41800 | 0.1679 | 1.0194 | | 0.1021 | 7.58 | 41900 | 0.1632 | 1.0418 | | 0.0963 | 7.6 | 42000 | 0.1712 | 1.0152 | | 0.0963 | 7.62 | 42100 | 0.1632 | 1.0364 | | 0.0963 | 7.63 | 42200 | 0.1702 | 1.0229 | | 0.0963 | 7.65 | 42300 | 0.1655 | 1.0179 | | 0.0963 | 7.67 | 42400 | 0.1698 | 1.0329 | | 0.1014 | 7.69 | 42500 | 0.1691 | 1.0398 | | 0.1014 | 7.71 | 42600 | 0.1638 | 1.0487 | | 0.1014 | 7.72 | 42700 | 0.1617 | 1.0210 | | 0.1014 | 7.74 | 42800 | 0.1648 | 1.0124 | | 0.1014 | 7.76 | 42900 | 0.1608 | 1.0202 | | 0.1008 | 7.78 | 43000 | 0.1611 | 1.0353 | | 0.1008 | 7.8 | 43100 | 0.1633 | 1.0319 | | 0.1008 | 7.81 | 43200 | 0.1640 | 1.0032 | | 0.1008 | 7.83 | 43300 | 0.1589 | 0.9985 | | 0.1008 | 7.85 | 43400 | 0.1630 | 0.9975 | | 0.0988 | 7.87 | 43500 | 0.1604 | 1.0053 | | 0.0988 | 7.89 | 43600 | 0.1687 | 1.0063 | | 0.0988 | 7.91 | 43700 | 0.1619 | 1.0096 | | 0.0988 | 7.92 | 43800 | 0.1565 | 0.9901 | | 0.0988 | 7.94 | 43900 | 0.1619 | 0.9742 | | 0.102 | 7.96 | 44000 | 0.1598 | 0.9593 | | 0.102 | 7.98 | 44100 | 0.1635 | 0.9718 | | 0.102 | 8.0 | 44200 | 0.1624 | 0.9903 | | 0.102 | 8.01 | 44300 | 0.1605 | 0.9882 | | 0.102 | 8.03 | 44400 | 0.1657 | 1.0128 | | 0.0961 | 8.05 | 44500 | 0.1651 | 1.0155 | | 0.0961 | 8.07 | 44600 | 0.1680 | 1.0194 | | 0.0961 | 8.09 | 44700 | 0.1694 | 1.0112 | | 0.0961 | 8.1 | 44800 | 0.1665 | 1.0073 | | 0.0961 | 8.12 | 44900 | 0.1612 | 1.0200 | | 0.0894 | 8.14 | 45000 | 0.1652 | 1.0337 | | 0.0894 | 8.16 | 45100 | 0.1626 | 1.0086 | | 0.0894 | 8.18 | 45200 | 0.1639 | 1.0083 | | 0.0894 | 8.19 | 45300 | 0.1634 | 1.0223 | | 0.0894 | 8.21 | 45400 | 0.1631 | 1.0339 | | 0.0887 | 8.23 | 45500 | 0.1640 | 1.0311 | | 0.0887 | 8.25 | 45600 | 0.1661 | 1.0264 | | 0.0887 | 8.27 | 45700 | 0.1650 | 1.0315 | | 0.0887 | 8.29 | 45800 | 0.1624 | 1.0390 
| | 0.0887 | 8.3 | 45900 | 0.1624 | 1.0350 | | 0.0884 | 8.32 | 46000 | 0.1615 | 1.0318 | | 0.0884 | 8.34 | 46100 | 0.1628 | 1.0410 | | 0.0884 | 8.36 | 46200 | 0.1627 | 1.0429 | | 0.0884 | 8.38 | 46300 | 0.1644 | 1.0320 | | 0.0884 | 8.39 | 46400 | 0.1633 | 1.0177 | | 0.0893 | 8.41 | 46500 | 0.1654 | 1.0189 | | 0.0893 | 8.43 | 46600 | 0.1598 | 1.0154 | | 0.0893 | 8.45 | 46700 | 0.1618 | 1.0250 | | 0.0893 | 8.47 | 46800 | 0.1639 | 1.0402 | | 0.0893 | 8.48 | 46900 | 0.1616 | 1.0336 | | 0.0869 | 8.5 | 47000 | 0.1613 | 1.0296 | | 0.0869 | 8.52 | 47100 | 0.1648 | 1.0568 | | 0.0869 | 8.54 | 47200 | 0.1625 | 1.0256 | | 0.0869 | 8.56 | 47300 | 0.1609 | 1.0390 | | 0.0869 | 8.57 | 47400 | 0.1606 | 1.0450 | | 0.0894 | 8.59 | 47500 | 0.1605 | 1.0445 | | 0.0894 | 8.61 | 47600 | 0.1660 | 1.0402 | | 0.0894 | 8.63 | 47700 | 0.1618 | 1.0444 | | 0.0894 | 8.65 | 47800 | 0.1669 | 1.0333 | | 0.0894 | 8.66 | 47900 | 0.1627 | 1.0364 | | 0.0885 | 8.68 | 48000 | 0.1616 | 1.0334 | | 0.0885 | 8.7 | 48100 | 0.1626 | 1.0564 | | 0.0885 | 8.72 | 48200 | 0.1624 | 1.0396 | | 0.0885 | 8.74 | 48300 | 0.1623 | 1.0396 | | 0.0885 | 8.76 | 48400 | 0.1612 | 1.0112 | | 0.0888 | 8.77 | 48500 | 0.1638 | 1.0292 | | 0.0888 | 8.79 | 48600 | 0.1639 | 0.9988 | | 0.0888 | 8.81 | 48700 | 0.1618 | 1.0127 | | 0.0888 | 8.83 | 48800 | 0.1584 | 1.0042 | | 0.0888 | 8.85 | 48900 | 0.1615 | 1.0041 | | 0.0887 | 8.86 | 49000 | 0.1637 | 1.0269 | | 0.0887 | 8.88 | 49100 | 0.1627 | 0.9989 | | 0.0887 | 8.9 | 49200 | 0.1583 | 1.0104 | | 0.0887 | 8.92 | 49300 | 0.1600 | 1.0214 | | 0.0887 | 8.94 | 49400 | 0.1599 | 1.0126 | | 0.0893 | 8.95 | 49500 | 0.1595 | 1.0516 | | 0.0893 | 8.97 | 49600 | 0.1625 | 1.0464 | | 0.0893 | 8.99 | 49700 | 0.1595 | 1.0361 | | 0.0893 | 9.01 | 49800 | 0.1614 | 1.0469 | | 0.0893 | 9.03 | 49900 | 0.1612 | 1.0304 | | 0.0834 | 9.04 | 50000 | 0.1643 | 1.0335 | | 0.0834 | 9.06 | 50100 | 0.1640 | 1.0175 | | 0.0834 | 9.08 | 50200 | 0.1655 | 1.0264 | | 0.0834 | 9.1 | 50300 | 0.1678 | 1.0243 | | 0.0834 | 9.12 | 50400 | 0.1659 | 1.0145 | | 0.079 | 9.14 | 50500 | 0.1644 | 1.0316 | | 0.079 | 9.15 | 50600 | 0.1630 | 1.0326 | | 0.079 | 9.17 | 50700 | 0.1634 | 1.0154 | | 0.079 | 9.19 | 50800 | 0.1697 | 1.0095 | | 0.079 | 9.21 | 50900 | 0.1678 | 1.0050 | | 0.078 | 9.23 | 51000 | 0.1626 | 1.0159 | | 0.078 | 9.24 | 51100 | 0.1666 | 1.0238 | | 0.078 | 9.26 | 51200 | 0.1644 | 1.0244 | | 0.078 | 9.28 | 51300 | 0.1655 | 1.0345 | | 0.078 | 9.3 | 51400 | 0.1615 | 1.0237 | | 0.0776 | 9.32 | 51500 | 0.1664 | 1.0180 | | 0.0776 | 9.33 | 51600 | 0.1603 | 1.0208 | | 0.0776 | 9.35 | 51700 | 0.1594 | 1.0230 | | 0.0776 | 9.37 | 51800 | 0.1622 | 1.0201 | | 0.0776 | 9.39 | 51900 | 0.1596 | 1.0039 | | 0.0782 | 9.41 | 52000 | 0.1645 | 1.0204 | | 0.0782 | 9.42 | 52100 | 0.1640 | 1.0318 | | 0.0782 | 9.44 | 52200 | 0.1621 | 1.0290 | | 0.0782 | 9.46 | 52300 | 0.1638 | 1.0318 | | 0.0782 | 9.48 | 52400 | 0.1613 | 1.0217 | | 0.0782 | 9.5 | 52500 | 0.1609 | 1.0261 | | 0.0782 | 9.52 | 52600 | 0.1625 | 1.0101 | | 0.0782 | 9.53 | 52700 | 0.1613 | 1.0058 | | 0.0782 | 9.55 | 52800 | 0.1599 | 1.0068 | | 0.0782 | 9.57 | 52900 | 0.1600 | 1.0110 | | 0.0797 | 9.59 | 53000 | 0.1594 | 1.0171 | | 0.0797 | 9.61 | 53100 | 0.1583 | 1.0124 | | 0.0797 | 9.62 | 53200 | 0.1646 | 1.0093 | | 0.0797 | 9.64 | 53300 | 0.1580 | 1.0201 | | 0.0797 | 9.66 | 53400 | 0.1599 | 1.0207 | | 0.0783 | 9.68 | 53500 | 0.1577 | 1.0226 | | 0.0783 | 9.7 | 53600 | 0.1593 | 1.0160 | | 0.0783 | 9.71 | 53700 | 0.1570 | 1.0173 | | 0.0783 | 9.73 | 53800 | 0.1614 | 1.0299 | | 0.0783 | 9.75 | 53900 | 0.1610 | 1.0184 | | 
0.0779 | 9.77 | 54000 | 0.1606 | 1.0173 | | 0.0779 | 9.79 | 54100 | 0.1577 | 1.0032 | | 0.0779 | 9.8 | 54200 | 0.1590 | 1.0070 | | 0.0779 | 9.82 | 54300 | 0.1580 | 1.0257 | | 0.0779 | 9.84 | 54400 | 0.1592 | 1.0108 | | 0.0778 | 9.86 | 54500 | 0.1617 | 0.9907 | | 0.0778 | 9.88 | 54600 | 0.1605 | 1.0189 | | 0.0778 | 9.89 | 54700 | 0.1605 | 1.0177 | | 0.0778 | 9.91 | 54800 | 0.1536 | 1.0275 | | 0.0778 | 9.93 | 54900 | 0.1658 | 1.0282 | | 0.0777 | 9.95 | 55000 | 0.1543 | 1.0385 | | 0.0777 | 9.97 | 55100 | 0.1559 | 1.0375 | | 0.0777 | 9.99 | 55200 | 0.1590 | 1.0215 | | 0.0777 | 10.0 | 55300 | 0.1624 | 1.0242 | | 0.0777 | 10.02 | 55400 | 0.1635 | 1.0244 | | 0.0712 | 10.04 | 55500 | 0.1629 | 1.0298 | | 0.0712 | 10.06 | 55600 | 0.1601 | 1.0299 | | 0.0712 | 10.08 | 55700 | 0.1625 | 1.0117 | | 0.0712 | 10.09 | 55800 | 0.1650 | 1.0233 | | 0.0712 | 10.11 | 55900 | 0.1631 | 1.0061 | | 0.0667 | 10.13 | 56000 | 0.1637 | 1.0226 | | 0.0667 | 10.15 | 56100 | 0.1607 | 1.0042 | | 0.0667 | 10.17 | 56200 | 0.1599 | 1.0117 | | 0.0667 | 10.18 | 56300 | 0.1623 | 1.0246 | | 0.0667 | 10.2 | 56400 | 0.1639 | 1.0294 | | 0.0695 | 10.22 | 56500 | 0.1650 | 1.0232 | | 0.0695 | 10.24 | 56600 | 0.1620 | 1.0289 | | 0.0695 | 10.26 | 56700 | 0.1667 | 1.0209 | | 0.0695 | 10.27 | 56800 | 0.1580 | 1.0163 | | 0.0695 | 10.29 | 56900 | 0.1646 | 1.0293 | | 0.0686 | 10.31 | 57000 | 0.1636 | 1.0106 | | 0.0686 | 10.33 | 57100 | 0.1586 | 1.0044 | | 0.0686 | 10.35 | 57200 | 0.1582 | 1.0213 | | 0.0686 | 10.37 | 57300 | 0.1627 | 1.0151 | | 0.0686 | 10.38 | 57400 | 0.1619 | 1.0248 | | 0.0686 | 10.4 | 57500 | 0.1596 | 1.0098 | | 0.0686 | 10.42 | 57600 | 0.1606 | 1.0031 | | 0.0686 | 10.44 | 57700 | 0.1620 | 1.0046 | | 0.0686 | 10.46 | 57800 | 0.1592 | 1.0018 | | 0.0686 | 10.47 | 57900 | 0.1592 | 1.0058 | | 0.0669 | 10.49 | 58000 | 0.1605 | 0.9961 | | 0.0669 | 10.51 | 58100 | 0.1632 | 1.0102 | | 0.0669 | 10.53 | 58200 | 0.1593 | 1.0061 | | 0.0669 | 10.55 | 58300 | 0.1586 | 1.0091 | | 0.0669 | 10.56 | 58400 | 0.1603 | 1.0085 | | 0.068 | 10.58 | 58500 | 0.1579 | 1.0031 | | 0.068 | 10.6 | 58600 | 0.1591 | 1.0021 | | 0.068 | 10.62 | 58700 | 0.1590 | 1.0163 | | 0.068 | 10.64 | 58800 | 0.1584 | 1.0045 | | 0.068 | 10.65 | 58900 | 0.1594 | 1.0158 | | 0.0693 | 10.67 | 59000 | 0.1568 | 1.0052 | | 0.0693 | 10.69 | 59100 | 0.1581 | 0.9955 | | 0.0693 | 10.71 | 59200 | 0.1622 | 0.9917 | | 0.0693 | 10.73 | 59300 | 0.1580 | 1.0018 | | 0.0693 | 10.75 | 59400 | 0.1601 | 1.0077 | | 0.0699 | 10.76 | 59500 | 0.1605 | 0.9997 | | 0.0699 | 10.78 | 59600 | 0.1585 | 1.0009 | | 0.0699 | 10.8 | 59700 | 0.1541 | 1.0058 | | 0.0699 | 10.82 | 59800 | 0.1583 | 1.0026 | | 0.0699 | 10.84 | 59900 | 0.1592 | 0.9992 | | 0.0671 | 10.85 | 60000 | 0.1590 | 1.0004 | | 0.0671 | 10.87 | 60100 | 0.1585 | 1.0060 | | 0.0671 | 10.89 | 60200 | 0.1579 | 1.0063 | | 0.0671 | 10.91 | 60300 | 0.1582 | 0.9949 | | 0.0671 | 10.93 | 60400 | 0.1562 | 1.0004 | | 0.0661 | 10.94 | 60500 | 0.1560 | 0.9950 | | 0.0661 | 10.96 | 60600 | 0.1564 | 0.9990 | | 0.0661 | 10.98 | 60700 | 0.1552 | 0.9982 | | 0.0661 | 11.0 | 60800 | 0.1596 | 1.0018 | | 0.0661 | 11.02 | 60900 | 0.1618 | 0.9905 | | 0.0634 | 11.03 | 61000 | 0.1652 | 0.9890 | | 0.0634 | 11.05 | 61100 | 0.1649 | 0.9886 | | 0.0634 | 11.07 | 61200 | 0.1668 | 0.9870 | | 0.0634 | 11.09 | 61300 | 0.1663 | 0.9921 | | 0.0634 | 11.11 | 61400 | 0.1650 | 0.9919 | | 0.0587 | 11.13 | 61500 | 0.1674 | 0.9831 | | 0.0587 | 11.14 | 61600 | 0.1633 | 0.9793 | | 0.0587 | 11.16 | 61700 | 0.1665 | 0.9781 | | 0.0587 | 11.18 | 61800 | 0.1642 | 0.9821 | | 0.0587 | 11.2 | 61900 | 
0.1638 | 0.9797 | | 0.0581 | 11.22 | 62000 | 0.1628 | 0.9727 | | 0.0581 | 11.23 | 62100 | 0.1661 | 0.9796 | | 0.0581 | 11.25 | 62200 | 0.1641 | 0.9830 | | 0.0581 | 11.27 | 62300 | 0.1601 | 0.9867 | | 0.0581 | 11.29 | 62400 | 0.1626 | 0.9757 | | 0.0584 | 11.31 | 62500 | 0.1632 | 1.0014 | | 0.0584 | 11.32 | 62600 | 0.1626 | 1.0052 | | 0.0584 | 11.34 | 62700 | 0.1586 | 1.0098 | | 0.0584 | 11.36 | 62800 | 0.1597 | 1.0151 | | 0.0584 | 11.38 | 62900 | 0.1624 | 1.0054 | | 0.0589 | 11.4 | 63000 | 0.1618 | 1.0018 | | 0.0589 | 11.41 | 63100 | 0.1635 | 1.0032 | | 0.0589 | 11.43 | 63200 | 0.1654 | 1.0142 | | 0.0589 | 11.45 | 63300 | 0.1646 | 1.0031 | | 0.0589 | 11.47 | 63400 | 0.1618 | 1.0118 | | 0.0579 | 11.49 | 63500 | 0.1634 | 1.0218 | | 0.0579 | 11.51 | 63600 | 0.1616 | 1.0179 | | 0.0579 | 11.52 | 63700 | 0.1603 | 1.0036 | | 0.0579 | 11.54 | 63800 | 0.1610 | 1.0150 | | 0.0579 | 11.56 | 63900 | 0.1605 | 1.0285 | | 0.0572 | 11.58 | 64000 | 0.1621 | 1.0261 | | 0.0572 | 11.6 | 64100 | 0.1625 | 1.0252 | | 0.0572 | 11.61 | 64200 | 0.1677 | 1.0257 | | 0.0572 | 11.63 | 64300 | 0.1656 | 1.0243 | | 0.0572 | 11.65 | 64400 | 0.1669 | 1.0270 | | 0.0592 | 11.67 | 64500 | 0.1605 | 1.0305 | | 0.0592 | 11.69 | 64600 | 0.1633 | 1.0277 | | 0.0592 | 11.7 | 64700 | 0.1606 | 1.0176 | | 0.0592 | 11.72 | 64800 | 0.1618 | 1.0249 | | 0.0592 | 11.74 | 64900 | 0.1609 | 1.0113 | | 0.0595 | 11.76 | 65000 | 0.1609 | 1.0254 | | 0.0595 | 11.78 | 65100 | 0.1662 | 1.0275 | | 0.0595 | 11.79 | 65200 | 0.1652 | 1.0164 | | 0.0595 | 11.81 | 65300 | 0.1638 | 1.0266 | | 0.0595 | 11.83 | 65400 | 0.1589 | 1.0274 | | 0.0588 | 11.85 | 65500 | 0.1607 | 1.0136 | | 0.0588 | 11.87 | 65600 | 0.1592 | 1.0136 | | 0.0588 | 11.88 | 65700 | 0.1581 | 1.0183 | | 0.0588 | 11.9 | 65800 | 0.1587 | 1.0133 | | 0.0588 | 11.92 | 65900 | 0.1596 | 1.0170 | | 0.0558 | 11.94 | 66000 | 0.1590 | 1.0161 | | 0.0558 | 11.96 | 66100 | 0.1597 | 1.0193 | | 0.0558 | 11.98 | 66200 | 0.1590 | 1.0193 | | 0.0558 | 11.99 | 66300 | 0.1608 | 1.0242 | | 0.0558 | 12.01 | 66400 | 0.1642 | 1.0231 | | 0.0555 | 12.03 | 66500 | 0.1679 | 1.0168 | | 0.0555 | 12.05 | 66600 | 0.1674 | 1.0083 | | 0.0555 | 12.07 | 66700 | 0.1658 | 1.0069 | | 0.0555 | 12.08 | 66800 | 0.1661 | 1.0134 | | 0.0555 | 12.1 | 66900 | 0.1682 | 1.0274 | | 0.0508 | 12.12 | 67000 | 0.1702 | 1.0219 | | 0.0508 | 12.14 | 67100 | 0.1694 | 1.0219 | | 0.0508 | 12.16 | 67200 | 0.1667 | 1.0236 | | 0.0508 | 12.17 | 67300 | 0.1672 | 1.0253 | | 0.0508 | 12.19 | 67400 | 0.1640 | 1.0215 | | 0.0513 | 12.21 | 67500 | 0.1649 | 1.0242 | | 0.0513 | 12.23 | 67600 | 0.1687 | 1.0262 | | 0.0513 | 12.25 | 67700 | 0.1655 | 1.0231 | | 0.0513 | 12.26 | 67800 | 0.1692 | 1.0176 | | 0.0513 | 12.28 | 67900 | 0.1675 | 1.0202 | | 0.0519 | 12.3 | 68000 | 0.1644 | 1.0241 | | 0.0519 | 12.32 | 68100 | 0.1651 | 1.0297 | | 0.0519 | 12.34 | 68200 | 0.1661 | 1.0287 | | 0.0519 | 12.36 | 68300 | 0.1665 | 1.0257 | | 0.0519 | 12.37 | 68400 | 0.1685 | 1.0233 | | 0.0522 | 12.39 | 68500 | 0.1636 | 1.0177 | | 0.0522 | 12.41 | 68600 | 0.1709 | 1.0200 | | 0.0522 | 12.43 | 68700 | 0.1684 | 1.0164 | | 0.0522 | 12.45 | 68800 | 0.1666 | 1.0119 | | 0.0522 | 12.46 | 68900 | 0.1683 | 1.0136 | | 0.05 | 12.48 | 69000 | 0.1696 | 1.0127 | | 0.05 | 12.5 | 69100 | 0.1708 | 1.0184 | | 0.05 | 12.52 | 69200 | 0.1654 | 1.0282 | | 0.05 | 12.54 | 69300 | 0.1700 | 1.0235 | | 0.05 | 12.55 | 69400 | 0.1688 | 1.0257 | | 0.0513 | 12.57 | 69500 | 0.1646 | 1.0274 | | 0.0513 | 12.59 | 69600 | 0.1660 | 1.0247 | | 0.0513 | 12.61 | 69700 | 0.1657 | 1.0188 | | 0.0513 | 12.63 | 69800 | 0.1654 | 1.0087 
| | 0.0513 | 12.64 | 69900 | 0.1681 | 1.0146 | | 0.0512 | 12.66 | 70000 | 0.1660 | 1.0185 | | 0.0512 | 12.68 | 70100 | 0.1690 | 1.0214 | | 0.0512 | 12.7 | 70200 | 0.1683 | 1.0160 | | 0.0512 | 12.72 | 70300 | 0.1695 | 1.0198 | | 0.0512 | 12.74 | 70400 | 0.1666 | 1.0193 | | 0.0484 | 12.75 | 70500 | 0.1654 | 1.0142 | | 0.0484 | 12.77 | 70600 | 0.1598 | 1.0154 | | 0.0484 | 12.79 | 70700 | 0.1623 | 1.0139 | | 0.0484 | 12.81 | 70800 | 0.1662 | 1.0180 | | 0.0484 | 12.83 | 70900 | 0.1659 | 1.0232 | | 0.0501 | 12.84 | 71000 | 0.1662 | 1.0202 | | 0.0501 | 12.86 | 71100 | 0.1639 | 1.0161 | | 0.0501 | 12.88 | 71200 | 0.1666 | 1.0151 | | 0.0501 | 12.9 | 71300 | 0.1644 | 1.0129 | | 0.0501 | 12.92 | 71400 | 0.1642 | 1.0171 | | 0.0482 | 12.93 | 71500 | 0.1635 | 1.0162 | | 0.0482 | 12.95 | 71600 | 0.1637 | 1.0186 | | 0.0482 | 12.97 | 71700 | 0.1639 | 1.0142 | | 0.0482 | 12.99 | 71800 | 0.1643 | 1.0122 | | 0.0482 | 13.01 | 71900 | 0.1679 | 1.0156 | | 0.0483 | 13.02 | 72000 | 0.1717 | 1.0224 | | 0.0483 | 13.04 | 72100 | 0.1742 | 1.0229 | | 0.0483 | 13.06 | 72200 | 0.1718 | 1.0237 | | 0.0483 | 13.08 | 72300 | 0.1742 | 1.0266 | | 0.0483 | 13.1 | 72400 | 0.1736 | 1.0257 | | 0.0443 | 13.12 | 72500 | 0.1741 | 1.0275 | | 0.0443 | 13.13 | 72600 | 0.1745 | 1.0325 | | 0.0443 | 13.15 | 72700 | 0.1737 | 1.0296 | | 0.0443 | 13.17 | 72800 | 0.1722 | 1.0303 | | 0.0443 | 13.19 | 72900 | 0.1702 | 1.0305 | | 0.0424 | 13.21 | 73000 | 0.1733 | 1.0241 | | 0.0424 | 13.22 | 73100 | 0.1748 | 1.0243 | | 0.0424 | 13.24 | 73200 | 0.1760 | 1.0231 | | 0.0424 | 13.26 | 73300 | 0.1745 | 1.0241 | | 0.0424 | 13.28 | 73400 | 0.1772 | 1.0217 | | 0.0424 | 13.3 | 73500 | 0.1755 | 1.0206 | | 0.0424 | 13.31 | 73600 | 0.1743 | 1.0242 | | 0.0424 | 13.33 | 73700 | 0.1738 | 1.0208 | | 0.0424 | 13.35 | 73800 | 0.1736 | 1.0249 | | 0.0424 | 13.37 | 73900 | 0.1747 | 1.0271 | | 0.0437 | 13.39 | 74000 | 0.1707 | 1.0241 | | 0.0437 | 13.4 | 74100 | 0.1731 | 1.0269 | | 0.0437 | 13.42 | 74200 | 0.1743 | 1.0290 | | 0.0437 | 13.44 | 74300 | 0.1739 | 1.0266 | | 0.0437 | 13.46 | 74400 | 0.1763 | 1.0246 | | 0.0443 | 13.48 | 74500 | 0.1724 | 1.0209 | | 0.0443 | 13.49 | 74600 | 0.1744 | 1.0244 | | 0.0443 | 13.51 | 74700 | 0.1717 | 1.0232 | | 0.0443 | 13.53 | 74800 | 0.1754 | 1.0217 | | 0.0443 | 13.55 | 74900 | 0.1721 | 1.0234 | | 0.0435 | 13.57 | 75000 | 0.1751 | 1.0197 | | 0.0435 | 13.59 | 75100 | 0.1727 | 1.0285 | | 0.0435 | 13.6 | 75200 | 0.1715 | 1.0221 | | 0.0435 | 13.62 | 75300 | 0.1746 | 1.0247 | | 0.0435 | 13.64 | 75400 | 0.1712 | 1.0231 | | 0.0436 | 13.66 | 75500 | 0.1719 | 1.0228 | | 0.0436 | 13.68 | 75600 | 0.1727 | 1.0197 | | 0.0436 | 13.69 | 75700 | 0.1750 | 1.0252 | | 0.0436 | 13.71 | 75800 | 0.1702 | 1.0241 | | 0.0436 | 13.73 | 75900 | 0.1720 | 1.0250 | | 0.0433 | 13.75 | 76000 | 0.1744 | 1.0210 | | 0.0433 | 13.77 | 76100 | 0.1735 | 1.0211 | | 0.0433 | 13.78 | 76200 | 0.1727 | 1.0205 | | 0.0433 | 13.8 | 76300 | 0.1706 | 1.0218 | | 0.0433 | 13.82 | 76400 | 0.1709 | 1.0238 | | 0.0431 | 13.84 | 76500 | 0.1705 | 1.0197 | | 0.0431 | 13.86 | 76600 | 0.1734 | 1.0223 | | 0.0431 | 13.87 | 76700 | 0.1695 | 1.0250 | | 0.0431 | 13.89 | 76800 | 0.1734 | 1.0232 | | 0.0431 | 13.91 | 76900 | 0.1724 | 1.0219 | | 0.041 | 13.93 | 77000 | 0.1706 | 1.0236 | | 0.041 | 13.95 | 77100 | 0.1689 | 1.0220 | | 0.041 | 13.97 | 77200 | 0.1738 | 1.0230 | | 0.041 | 13.98 | 77300 | 0.1727 | 1.0254 | | 0.041 | 14.0 | 77400 | 0.1721 | 1.0261 | | 0.041 | 14.02 | 77500 | 0.1760 | 1.0261 | | 0.041 | 14.04 | 77600 | 0.1772 | 1.0202 | | 0.041 | 14.06 | 77700 | 0.1782 | 1.0202 | | 0.041 | 
14.07 | 77800 | 0.1777 | 1.0222 | | 0.041 | 14.09 | 77900 | 0.1787 | 1.0203 | | 0.0383 | 14.11 | 78000 | 0.1790 | 1.0236 | | 0.0383 | 14.13 | 78100 | 0.1812 | 1.0245 | | 0.0383 | 14.15 | 78200 | 0.1778 | 1.0224 | | 0.0383 | 14.16 | 78300 | 0.1771 | 1.0231 | | 0.0383 | 14.18 | 78400 | 0.1782 | 1.0242 | | 0.0391 | 14.2 | 78500 | 0.1785 | 1.0262 | | 0.0391 | 14.22 | 78600 | 0.1791 | 1.0261 | | 0.0391 | 14.24 | 78700 | 0.1770 | 1.0254 | | 0.0391 | 14.25 | 78800 | 0.1810 | 1.0257 | | 0.0391 | 14.27 | 78900 | 0.1794 | 1.0241 | | 0.0387 | 14.29 | 79000 | 0.1774 | 1.0256 | | 0.0387 | 14.31 | 79100 | 0.1774 | 1.0236 | | 0.0387 | 14.33 | 79200 | 0.1759 | 1.0222 | | 0.0387 | 14.35 | 79300 | 0.1787 | 1.0237 | | 0.0387 | 14.36 | 79400 | 0.1788 | 1.0227 | | 0.0372 | 14.38 | 79500 | 0.1789 | 1.0232 | | 0.0372 | 14.4 | 79600 | 0.1771 | 1.0254 | | 0.0372 | 14.42 | 79700 | 0.1777 | 1.0244 | | 0.0372 | 14.44 | 79800 | 0.1791 | 1.0225 | | 0.0372 | 14.45 | 79900 | 0.1786 | 1.0237 | | 0.0385 | 14.47 | 80000 | 0.1782 | 1.0243 | | 0.0385 | 14.49 | 80100 | 0.1770 | 1.0236 | | 0.0385 | 14.51 | 80200 | 0.1782 | 1.0240 | | 0.0385 | 14.53 | 80300 | 0.1764 | 1.0243 | | 0.0385 | 14.54 | 80400 | 0.1748 | 1.0248 | | 0.039 | 14.56 | 80500 | 0.1758 | 1.0232 | | 0.039 | 14.58 | 80600 | 0.1763 | 1.0246 | | 0.039 | 14.6 | 80700 | 0.1770 | 1.0220 | | 0.039 | 14.62 | 80800 | 0.1788 | 1.0225 | | 0.039 | 14.63 | 80900 | 0.1781 | 1.0230 | | 0.039 | 14.65 | 81000 | 0.1779 | 1.0230 | | 0.039 | 14.67 | 81100 | 0.1755 | 1.0212 | | 0.039 | 14.69 | 81200 | 0.1765 | 1.0226 | | 0.039 | 14.71 | 81300 | 0.1787 | 1.0241 | | 0.039 | 14.72 | 81400 | 0.1782 | 1.0250 | | 0.0368 | 14.74 | 81500 | 0.1780 | 1.0248 | | 0.0368 | 14.76 | 81600 | 0.1782 | 1.0242 | | 0.0368 | 14.78 | 81700 | 0.1782 | 1.0242 | | 0.0368 | 14.8 | 81800 | 0.1792 | 1.0241 | | 0.0368 | 14.82 | 81900 | 0.1796 | 1.0238 | | 0.0378 | 14.83 | 82000 | 0.1795 | 1.0236 | | 0.0378 | 14.85 | 82100 | 0.1796 | 1.0239 | | 0.0378 | 14.87 | 82200 | 0.1792 | 1.0236 | | 0.0378 | 14.89 | 82300 | 0.1789 | 1.0239 | | 0.0378 | 14.91 | 82400 | 0.1788 | 1.0238 | | 0.0386 | 14.92 | 82500 | 0.1787 | 1.0239 | | 0.0386 | 14.94 | 82600 | 0.1786 | 1.0236 | | 0.0386 | 14.96 | 82700 | 0.1786 | 1.0237 | | 0.0386 | 14.98 | 82800 | 0.1787 | 1.0239 | | 0.0386 | 15.0 | 82900 | 0.1788 | 1.0238 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
fgaim/t5-small-squad-v2
fgaim
2022-01-30T21:35:54Z
34
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:c4", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en datasets: - c4 - squad tags: - text2text-generation widget: - text: "question: What is the atomic number for oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8." - text: "question: What is the chemical symbol of Oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8." license: apache-2.0 --- T5-small for QA --- [Google's T5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) pre-trained on the [C4](https://huggingface.co/datasets/c4) dataset, fine-tuned for Question-Answering on [SQuAD v2](https://huggingface.co/datasets/squad_v2) with the following hyperparameters: ``` optimizer=adamw_hf learning_rate=3e-5 adam_beta1=0.9 adam_beta2=0.999 adam_epsilon=1e-08 num_train_epochs=2 per_device_train_batch_size=12 ``` Usage --- The input [context and question] has to be prepared in a specific way as follows: ```python from transformers import pipeline def prep_input(_context, _question): return " ".join(["question:", _question.strip(), "context:", _context.strip()]) t5qa = pipeline("text2text-generation", "fgaim/t5-small-squad-v2") context = """ Oxygen is a chemical element with symbol O and atomic number 8. It is a member of the chalcogen group on the periodic table and is a highly reactive nonmetal and oxidizing agent that readily forms compounds (notably oxides) with most elements. By mass, oxygen is the third-most abundant element in the universe, after hydrogen and helium. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O2. """ t5qa(prep_input(context, "How many atoms combine to form dioxygen?")) # [{'generated_text': 'two'}] t5qa(prep_input(context, "What element makes up almost half of the earth's crust by mass?")) # [{'generated_text': 'oxygen'}] t5qa(prep_input(context, "What are the most abundant elements of the universe by mass?")) # [{'generated_text': 'hydrogen and helium'}] ```
z-uo/vits-male-it
z-uo
2022-01-30T20:20:35Z
4
1
transformers
[ "transformers", "tensorboard", "text-to-speech", "it", "dataset:z-uo/female-LJSpeech-italian", "endpoints_compatible", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - text-to-speech language: - it model-index: - name: vits-male-it results: [] datasets: - z-uo/female-LJSpeech-italian --- # Coqui Model for TTS ``` pip install TTS git clone https://huggingface.co/z-uo/vits-male-it # predict one tts --text "ciao pluto" --model_path "vits-male-it/best_model.pth.tar" --config_path "vits-male-it/config.json" # predict server tts-server --model_path "vits-male-it/best_model.pth.tar" --config_path "vits-male-it/config.json" firefox localhost:5002 ``` More information about the training script is available in [this repo](https://github.com/nicolalandro/train_coqui_tts_ita).
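For programmatic use, the same checkpoint can be driven from Python. This is a minimal sketch, assuming the repository was cloned as in the shell commands above and using Coqui TTS's `Synthesizer` helper:

```python
from TTS.utils.synthesizer import Synthesizer

# Paths assume the repo was cloned into ./vits-male-it as shown above
synthesizer = Synthesizer(
    tts_checkpoint="vits-male-it/best_model.pth.tar",
    tts_config_path="vits-male-it/config.json",
)

# Synthesize Italian speech and write it to disk
wav = synthesizer.tts("ciao pluto")
synthesizer.save_wav(wav, "ciao_pluto.wav")
```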
Kayvane/distilbert-complaints-product
Kayvane
2022-01-30T19:15:13Z
33
3
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:consumer_complaints", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - consumer_complaints model-index: - name: distilbert-complaints-product results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-complaints-product This model was trained on the [CFPB](https://www.consumerfinance.gov/data-research/consumer-complaints/) dataset, also made available in the HuggingFace Datasets library. It predicts the type of financial complaint based on the text provided. ## Model description A DistilBERT text classification model with 18 possible classes to determine the nature of a financial customer complaint. ## Intended uses & limitations This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation: - **Infrastructure:** Terraform - **ML Ops:** HuggingFace (Datasets, Hub, Transformers) - **ML Explainability:** SHAP - **Cloud:** AWS - Model Hosting: Lambda - DB Backend: DynamoDB - Orchestration: Step-Functions - UI Hosting: EC2 - Routing: API Gateway - **UI:** Budibase ## Training and evaluation data consumer_complaints dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
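As a quick illustration, the classifier can be queried with the standard `transformers` pipeline. This is a sketch: the complaint text below is an invented example, and the returned label names come from the model's own config:

```python
from transformers import pipeline

# Load the fine-tuned complaint classifier from the Hub
classifier = pipeline("text-classification", model="Kayvane/distilbert-complaints-product")

# Invented example complaint; the pipeline returns the predicted product class and score
print(classifier("I was charged twice for my mortgage payment and the bank refuses to refund the duplicate charge."))
```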
Erfan/mT5-base_Farsi_Title_Generator
Erfan
2022-01-30T18:00:42Z
11
2
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "Title-Generation", "fa", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- language: - fa tags: - Title-Generation metrics: - ROUGE ---
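The card provides no usage details; the sketch below assumes the model consumes raw Farsi article text and emits a title through the standard `text2text-generation` pipeline (the input format is an unverified assumption):

```python
from transformers import pipeline

# Assumption: the model maps raw Farsi article text directly to a title (unverified)
title_generator = pipeline("text2text-generation", model="Erfan/mT5-base_Farsi_Title_Generator")

article = "..."  # Farsi article body goes here
print(title_generator(article, max_length=32)[0]["generated_text"])
```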
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small
tomascufaro
2022-01-30T17:23:59Z
14
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-small This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3763 - Wer: 0.1791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2277 | 0.26 | 400 | 0.2601 | 0.2291 | | 0.2932 | 0.53 | 800 | 0.2950 | 0.2670 | | 0.3019 | 0.79 | 1200 | 0.3247 | 0.2766 | | 0.2987 | 1.05 | 1600 | 0.3031 | 0.2606 | | 0.261 | 1.32 | 2000 | 0.2994 | 0.2620 | | 0.2651 | 1.58 | 2400 | 0.3134 | 0.2700 | | 0.264 | 1.85 | 2800 | 0.3016 | 0.2641 | | 0.2475 | 2.11 | 3200 | 0.3135 | 0.2661 | | 0.2269 | 2.37 | 3600 | 0.3029 | 0.2562 | | 0.2389 | 2.64 | 4000 | 0.3035 | 0.2549 | | 0.2319 | 2.9 | 4400 | 0.3022 | 0.2551 | | 0.2123 | 3.16 | 4800 | 0.3256 | 0.2638 | | 0.2094 | 3.43 | 5200 | 0.3227 | 0.2712 | | 0.2121 | 3.69 | 5600 | 0.3085 | 0.2596 | | 0.207 | 3.96 | 6000 | 0.3041 | 0.2597 | | 0.1809 | 4.22 | 6400 | 0.3122 | 0.2524 | | 0.1846 | 4.48 | 6800 | 0.3254 | 0.2579 | | 0.1885 | 4.75 | 7200 | 0.2958 | 0.2437 | | 0.1923 | 5.01 | 7600 | 0.3136 | 0.2502 | | 0.1626 | 5.27 | 8000 | 0.3059 | 0.2488 | | 0.1704 | 5.54 | 8400 | 0.3082 | 0.2515 | | 0.1674 | 5.8 | 8800 | 0.3196 | 0.2509 | | 0.1691 | 6.06 | 9200 | 0.3193 | 0.25 | | 0.1499 | 6.33 | 9600 | 0.3529 | 0.2635 | | 0.1568 | 6.59 | 10000 | 0.3241 | 0.2481 | | 0.1538 | 6.86 | 10400 | 0.3354 | 0.2476 | | 0.1503 | 7.12 | 10800 | 0.3180 | 0.2402 | | 0.136 | 7.38 | 11200 | 0.3230 | 0.2397 | | 0.1413 | 7.65 | 11600 | 0.3178 | 0.2451 | | 0.147 | 7.91 | 12000 | 0.3170 | 0.2389 | | 0.1341 | 8.17 | 12400 | 0.3380 | 0.2501 | | 0.1329 | 8.44 | 12800 | 0.3265 | 0.2414 | | 0.1314 | 8.7 | 13200 | 0.3281 | 0.2482 | | 0.1312 | 8.97 | 13600 | 0.3259 | 0.2539 | | 0.12 | 9.23 | 14000 | 0.3291 | 0.2424 | | 0.1193 | 9.49 | 14400 | 0.3302 | 0.2412 | | 0.1189 | 9.76 | 14800 | 0.3376 | 0.2407 | | 0.1217 | 10.02 | 15200 | 0.3334 | 0.2400 | | 0.1118 | 10.28 | 15600 | 0.3359 | 0.2368 | | 0.1139 | 10.55 | 16000 | 0.3239 | 0.2335 | | 0.1106 | 10.81 | 16400 | 0.3374 | 0.2352 | | 0.1081 | 11.07 | 16800 | 0.3585 | 0.2434 | | 0.1063 | 11.34 | 17200 | 0.3639 | 0.2472 | | 0.1041 | 11.6 | 17600 | 0.3399 | 0.2423 | | 0.1062 | 11.87 | 18000 | 0.3410 | 0.2388 | | 0.1012 | 12.13 | 18400 | 0.3597 | 0.2413 | | 0.0953 | 12.39 | 18800 | 0.3440 | 0.2296 | | 0.097 | 12.66 | 19200 | 0.3440 | 0.2269 | | 0.0968 | 12.92 | 19600 | 0.3498 | 0.2333 | | 0.0902 | 13.18 | 20000 | 0.3471 | 0.2290 | | 0.0868 | 
13.45 | 20400 | 0.3462 | 0.2266 | | 0.0892 | 13.71 | 20800 | 0.3373 | 0.2227 | | 0.0902 | 13.97 | 21200 | 0.3377 | 0.2240 | | 0.0846 | 14.24 | 21600 | 0.3484 | 0.2237 | | 0.0839 | 14.5 | 22000 | 0.3706 | 0.2260 | | 0.0834 | 14.77 | 22400 | 0.3430 | 0.2268 | | 0.0841 | 15.03 | 22800 | 0.3489 | 0.2259 | | 0.076 | 15.29 | 23200 | 0.3626 | 0.2281 | | 0.0771 | 15.56 | 23600 | 0.3624 | 0.2268 | | 0.0773 | 15.82 | 24000 | 0.3440 | 0.2252 | | 0.0759 | 16.08 | 24400 | 0.3532 | 0.2170 | | 0.0745 | 16.35 | 24800 | 0.3686 | 0.2188 | | 0.0713 | 16.61 | 25200 | 0.3691 | 0.2195 | | 0.0718 | 16.88 | 25600 | 0.3470 | 0.2108 | | 0.0685 | 17.14 | 26000 | 0.3756 | 0.2179 | | 0.0689 | 17.4 | 26400 | 0.3542 | 0.2149 | | 0.0671 | 17.67 | 26800 | 0.3461 | 0.2165 | | 0.0737 | 17.93 | 27200 | 0.3473 | 0.2238 | | 0.0669 | 18.19 | 27600 | 0.3441 | 0.2138 | | 0.0629 | 18.46 | 28000 | 0.3721 | 0.2155 | | 0.0632 | 18.72 | 28400 | 0.3667 | 0.2126 | | 0.0647 | 18.98 | 28800 | 0.3579 | 0.2097 | | 0.0603 | 19.25 | 29200 | 0.3670 | 0.2130 | | 0.0604 | 19.51 | 29600 | 0.3750 | 0.2142 | | 0.0619 | 19.78 | 30000 | 0.3804 | 0.2160 | | 0.0603 | 20.04 | 30400 | 0.3764 | 0.2124 | | 0.0577 | 20.3 | 30800 | 0.3858 | 0.2097 | | 0.0583 | 20.57 | 31200 | 0.3520 | 0.2089 | | 0.0561 | 20.83 | 31600 | 0.3615 | 0.2079 | | 0.0545 | 21.09 | 32000 | 0.3824 | 0.2032 | | 0.0525 | 21.36 | 32400 | 0.3858 | 0.2091 | | 0.0524 | 21.62 | 32800 | 0.3956 | 0.2099 | | 0.0527 | 21.89 | 33200 | 0.3667 | 0.2025 | | 0.0514 | 22.15 | 33600 | 0.3708 | 0.2032 | | 0.0506 | 22.41 | 34000 | 0.3815 | 0.2053 | | 0.0478 | 22.68 | 34400 | 0.3671 | 0.2007 | | 0.049 | 22.94 | 34800 | 0.3758 | 0.2003 | | 0.0477 | 23.2 | 35200 | 0.3786 | 0.2014 | | 0.045 | 23.47 | 35600 | 0.3732 | 0.1998 | | 0.0426 | 23.73 | 36000 | 0.3737 | 0.2010 | | 0.0444 | 23.99 | 36400 | 0.3600 | 0.1990 | | 0.0433 | 24.26 | 36800 | 0.3689 | 0.1976 | | 0.0442 | 24.52 | 37200 | 0.3787 | 0.1968 | | 0.0419 | 24.79 | 37600 | 0.3652 | 0.1961 | | 0.042 | 25.05 | 38000 | 0.3820 | 0.1964 | | 0.0419 | 25.31 | 38400 | 0.3786 | 0.1919 | | 0.0376 | 25.58 | 38800 | 0.3842 | 0.1934 | | 0.0385 | 25.84 | 39200 | 0.3767 | 0.1900 | | 0.0396 | 26.1 | 39600 | 0.3688 | 0.1888 | | 0.0371 | 26.37 | 40000 | 0.3815 | 0.1894 | | 0.0363 | 26.63 | 40400 | 0.3748 | 0.1878 | | 0.0377 | 26.9 | 40800 | 0.3713 | 0.1852 | | 0.0352 | 27.16 | 41200 | 0.3734 | 0.1851 | | 0.0355 | 27.42 | 41600 | 0.3776 | 0.1874 | | 0.0333 | 27.69 | 42000 | 0.3867 | 0.1841 | | 0.0348 | 27.95 | 42400 | 0.3823 | 0.1839 | | 0.0329 | 28.21 | 42800 | 0.3795 | 0.1822 | | 0.0325 | 28.48 | 43200 | 0.3711 | 0.1813 | | 0.0328 | 28.74 | 43600 | 0.3721 | 0.1781 | | 0.0312 | 29.0 | 44000 | 0.3803 | 0.1816 | | 0.0318 | 29.27 | 44400 | 0.3758 | 0.1794 | | 0.0302 | 29.53 | 44800 | 0.3792 | 0.1784 | | 0.0339 | 29.8 | 45200 | 0.3763 | 0.1791 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
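For orientation, transcription with this checkpoint can be sketched via the `automatic-speech-recognition` pipeline; the audio path below is a placeholder, and the audio is assumed to be 16 kHz mono, as is standard for wav2vec2 models:

```python
from transformers import pipeline

# Load the fine-tuned Spanish ASR model
asr = pipeline("automatic-speech-recognition", model="tomascufaro/wav2vec2-large-xls-r-300m-spanish-small")

# "sample_es.wav" is a placeholder path; wav2vec2 expects 16 kHz mono audio
print(asr("sample_es.wav")["text"])
```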
anuragshas/wav2vec2-xls-r-1b-hi-cv8
anuragshas
2022-01-30T15:20:16Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - hi license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.6780 - Wer: 0.3670 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.514 | 2.07 | 400 | 1.4589 | 0.8531 | | 1.4289 | 4.15 | 800 | 0.8940 | 0.6475 | | 1.276 | 6.22 | 1200 | 0.7743 | 0.6089 | | 1.2213 | 8.29 | 1600 | 0.6919 | 0.4973 | | 1.1522 | 10.36 | 2000 | 0.6635 | 0.4588 | | 1.0914 | 12.44 | 2400 | 0.6839 | 0.4586 | | 1.0499 | 14.51 | 2800 | 0.7151 | 0.4467 | | 1.0238 | 16.58 | 3200 | 0.6824 | 0.4436 | | 0.9963 | 18.65 | 3600 | 0.6872 | 0.4437 | | 0.9728 | 20.73 | 4000 | 0.7047 | 0.4244 | | 0.9373 | 22.8 | 4400 | 0.6569 | 0.4189 | | 0.9028 | 24.87 | 4800 | 0.6623 | 0.4094 | | 0.8759 | 26.94 | 5200 | 0.6723 | 0.4152 | | 0.8824 | 29.02 | 5600 | 0.6467 | 0.4017 | | 0.8371 | 31.09 | 6000 | 0.6911 | 0.4080 | | 0.8205 | 33.16 | 6400 | 0.7145 | 0.4063 | | 0.7837 | 35.23 | 6800 | 0.7037 | 0.3930 | | 0.7708 | 37.31 | 7200 | 0.6925 | 0.3840 | | 0.7359 | 39.38 | 7600 | 0.7034 | 0.3829 | | 0.7153 | 41.45 | 8000 | 0.7030 | 0.3794 | | 0.7127 | 43.52 | 8400 | 0.6823 | 0.3761 | | 0.6884 | 45.6 | 8800 | 0.6854 | 0.3711 | | 0.6835 | 47.67 | 9200 | 0.6723 | 0.3665 | | 0.6703 | 49.74 | 9600 | 0.6773 | 0.3668 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
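The hyperparameters listed above translate roughly into `transformers` `TrainingArguments` as sketched below; the output directory is a placeholder, and any argument not listed in the card keeps its default:

```python
from transformers import TrainingArguments

# Placeholder output_dir; the remaining values mirror the hyperparameter list above
training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-1b-hi-cv8",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1500,
    num_train_epochs=50.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```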
huggingtweets/sardoche_lol
huggingtweets
2022-01-30T15:00:56Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/sardoche_lol/1643554725712/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1450594532186263560/hiL4EyAm_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sardoche</div> <div style="text-align: center; font-size: 14px;">@sardoche_lol</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Sardoche. | Data | Sardoche | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 242 | | Short tweets | 374 | | Tweets kept | 2633 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/24g273w4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sardoche_lol's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3k2srh5a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3k2srh5a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sardoche_lol') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
imvladikon/charbert-roberta-wiki
imvladikon
2022-01-30T11:37:26Z
10
1
transformers
[ "transformers", "pytorch", "language model", "en", "dataset:wikipedia", "arxiv:2011.01513", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en tags: - language model datasets: - wikipedia --- Pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT). ``` @misc{ma2020charbert, title={CharBERT: Character-aware Pre-trained Language Model}, author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu}, year={2020}, eprint={2011.01513}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
imvladikon/charbert-bert-wiki
imvladikon
2022-01-30T11:35:48Z
63
3
transformers
[ "transformers", "pytorch", "language model", "en", "dataset:wikipedia", "arxiv:2011.01513", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en tags: - language model datasets: - wikipedia --- Pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT). ``` @misc{ma2020charbert, title={CharBERT: Character-aware Pre-trained Language Model}, author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu}, year={2020}, eprint={2011.01513}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
pinecone/mpnet-retriever-discourse
pinecone
2022-01-30T07:23:58Z
4
2
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "question-answering", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - question-answering --- # MPNet Retriever (Discourse) This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used as a retriever model in open-domain question-answering tasks. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('pinecone/mpnet-retriever-discourse') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-discourse') model = AutoModel.from_pretrained('pinecone/mpnet-retriever-discourse') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Training The model was fine-tuned on question-answer pairs scraped from several ML-focused Discourse forums \[HuggingFace, PyTorch, Streamlit, TensorFlow\].
The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 105 with parameters: ``` {'batch_size': 12} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors Fine-tuned by [James Briggs](https://www.youtube.com/c/jamesbriggs) at [Pinecone](https://www.pinecone.io). Learn more about the [fine-tuning process here](https://www.pinecone.io/learn/retriever-models/).
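As a follow-up to the usage snippets above, a retriever is typically applied by embedding a query and a set of candidate passages, then ranking by cosine similarity; a minimal sketch with `sentence-transformers`' `util.cos_sim` (query and passages are invented examples):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pinecone/mpnet-retriever-discourse")

query = "How do I freeze the encoder layers during fine-tuning?"  # invented example
passages = [
    "Set requires_grad to False on the encoder parameters before training.",
    "Streamlit reruns the whole script whenever a widget changes.",
]

# Cosine similarities between the query and each passage; higher = more relevant
scores = util.cos_sim(model.encode(query), model.encode(passages))
print(scores)
```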
jcmc/wav2vec-1b-cv8-ir-n
jcmc
2022-01-30T07:16:19Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ga-IE license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GA-IE dataset. It achieves the following results on the evaluation set: - Loss: 0.9810 - Wer: 0.4761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.2427 | 15.15 | 500 | 1.4632 | 0.9481 | | 1.3128 | 30.3 | 1000 | 0.8662 | 0.6195 | | 0.9403 | 45.45 | 1500 | 0.8163 | 0.5169 | | 0.6868 | 60.61 | 2000 | 0.8661 | 0.4858 | | 0.563 | 75.76 | 2500 | 0.9447 | 0.4867 | | 0.4887 | 90.91 | 3000 | 0.9650 | 0.4823 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
pablouribe/xls-r-ab-test
pablouribe
2022-01-30T05:13:34Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset. It achieves the following results on the evaluation set: - Loss: 133.2596 - Wer: 19.1571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
anton-l/wav2vec2-xls-r-common_voice-tr-ft-100sh
anton-l
2022-01-30T02:42:22Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer model-index: - name: wav2vec2-xls-r-common_voice-tr-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-common_voice-tr-ft This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.5806 - Wer: 0.3998 - Cer: 0.1053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:----:|:---------------:|:------:|:------:| | 0.5369 | 17.0 | 500 | 0.6021 | 0.6366 | 0.1727 | | 0.3542 | 34.0 | 1000 | 0.5265 | 0.4906 | 0.1278 | | 0.1866 | 51.0 | 1500 | 0.5805 | 0.4768 | 0.1261 | | 0.1674 | 68.01 | 2000 | 0.5336 | 0.4518 | 0.1186 | | 0.19 | 86.0 | 2500 | 0.5676 | 0.4427 | 0.1151 | | 0.0815 | 103.0 | 3000 | 0.5510 | 0.4268 | 0.1125 | | 0.0545 | 120.0 | 3500 | 0.5608 | 0.4175 | 0.1099 | | 0.0299 | 137.01 | 4000 | 0.5875 | 0.4222 | 0.1124 | | 0.0267 | 155.0 | 4500 | 0.5882 | 0.4026 | 0.1063 | | 0.025 | 172.0 | 5000 | 0.5806 | 0.3998 | 0.1053 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
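The WER and CER reported above can be reproduced on one's own transcriptions with the `jiwer` package; a minimal sketch (the reference/hypothesis pair is an invented example):

```python
import jiwer

reference = "merhaba nasılsın"   # invented example reference transcript
hypothesis = "merhaba nasilsin"  # invented example model output

print("WER:", jiwer.wer(reference, hypothesis))
print("CER:", jiwer.cer(reference, hypothesis))
```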
huggingtweets/goando-kenmcalinn-voluntas
huggingtweets
2022-01-30T02:24:29Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/goando-kenmcalinn-voluntas/1643509465268/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1145832571214815232/KYNcOP04_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1314997569475547137/4x1-5ejx_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/858198338444836864/OFlImt8f_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Go Ando / PREDUCTS / THE GUILD & Ken McAlinn & V</div> <div style="text-align: center; font-size: 14px;">@goando-kenmcalinn-voluntas</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Go Ando / PREDUCTS / THE GUILD & Ken McAlinn & V. | Data | Go Ando / PREDUCTS / THE GUILD | Ken McAlinn | V | | --- | --- | --- | --- | | Tweets downloaded | 3247 | 3250 | 3246 | | Retweets | 91 | 22 | 1040 | | Short tweets | 1680 | 2144 | 698 | | Tweets kept | 1476 | 1084 | 1508 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kzei9u5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando-kenmcalinn-voluntas's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mdna8jc) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mdna8jc/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/goando-kenmcalinn-voluntas') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/shiikazuo
huggingtweets
2022-01-30T01:27:28Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/shiikazuo/1643506044134/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/3624876884/b16d250401cc357c5be9859f7ba3db8f_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">志位和夫</div> <div style="text-align: center; font-size: 14px;">@shiikazuo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 志位和夫. | Data | 志位和夫 | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 38 | | Short tweets | 35 | | Tweets kept | 3176 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/243t6rzm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shiikazuo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eiaaoe96) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eiaaoe96/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/shiikazuo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Adil617/wav2vec2-base-timit-demo-colab
Adil617
2022-01-29T21:05:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9314 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 8.686 | 0.16 | 20 | 13.6565 | 1.0 | | 8.0711 | 0.32 | 40 | 12.5379 | 1.0 | | 6.9967 | 0.48 | 60 | 9.7215 | 1.0 | | 5.2368 | 0.64 | 80 | 5.8459 | 1.0 | | 3.4499 | 0.8 | 100 | 3.3413 | 1.0 | | 3.1261 | 0.96 | 120 | 3.2858 | 1.0 | | 3.0654 | 1.12 | 140 | 3.1945 | 1.0 | | 3.0421 | 1.28 | 160 | 3.1296 | 1.0 | | 3.0035 | 1.44 | 180 | 3.1172 | 1.0 | | 3.0067 | 1.6 | 200 | 3.1217 | 1.0 | | 2.9867 | 1.76 | 220 | 3.0715 | 1.0 | | 2.9653 | 1.92 | 240 | 3.0747 | 1.0 | | 2.9629 | 2.08 | 260 | 2.9984 | 1.0 | | 2.9462 | 2.24 | 280 | 2.9991 | 1.0 | | 2.9391 | 2.4 | 300 | 3.0391 | 1.0 | | 2.934 | 2.56 | 320 | 2.9682 | 1.0 | | 2.9193 | 2.72 | 340 | 2.9701 | 1.0 | | 2.8985 | 2.88 | 360 | 2.9314 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
huggingtweets/tylerrjoseph
huggingtweets
2022-01-29T12:35:08Z
3
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/tylerrjoseph/1643459612585/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1461794294336045066/SUrpcEaz_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">tyler jøseph</div> <div style="text-align: center; font-size: 14px;">@tylerrjoseph</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from tyler jøseph. | Data | tyler jøseph | | --- | --- | | Tweets downloaded | 474 | | Retweets | 54 | | Short tweets | 79 | | Tweets kept | 341 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xiz1b44/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tylerrjoseph's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2mp0omnb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2mp0omnb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tylerrjoseph') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
kika2000/wav2vec2-large-xls-r-300m-kika5_my-colab
kika2000
2022-01-29T12:28:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-kika5_my-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-kika5_my-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3860 - Wer: 0.3505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.0007 | 4.82 | 400 | 0.6696 | 0.8283 | | 0.2774 | 9.64 | 800 | 0.4231 | 0.5476 | | 0.1182 | 14.46 | 1200 | 0.4253 | 0.5102 | | 0.0859 | 19.28 | 1600 | 0.4600 | 0.4866 | | 0.0693 | 24.1 | 2000 | 0.4030 | 0.4533 | | 0.0611 | 28.92 | 2400 | 0.4189 | 0.4412 | | 0.0541 | 33.73 | 2800 | 0.4272 | 0.4380 | | 0.0478 | 38.55 | 3200 | 0.4537 | 0.4505 | | 0.0428 | 43.37 | 3600 | 0.4349 | 0.4181 | | 0.038 | 48.19 | 4000 | 0.4562 | 0.4199 | | 0.0345 | 53.01 | 4400 | 0.4209 | 0.4310 | | 0.0316 | 57.83 | 4800 | 0.4336 | 0.4058 | | 0.0288 | 62.65 | 5200 | 0.4004 | 0.3920 | | 0.025 | 67.47 | 5600 | 0.4115 | 0.3857 | | 0.0225 | 72.29 | 6000 | 0.4296 | 0.3948 | | 0.0182 | 77.11 | 6400 | 0.3963 | 0.3772 | | 0.0165 | 81.93 | 6800 | 0.3921 | 0.3687 | | 0.0152 | 86.75 | 7200 | 0.3969 | 0.3592 | | 0.0133 | 91.57 | 7600 | 0.3803 | 0.3527 | | 0.0118 | 96.39 | 8000 | 0.3860 | 0.3505 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/_ikeay
huggingtweets
2022-01-29T08:38:34Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/_ikeay/1643445509714/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1438483410503176195/v_ghm6Un_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">いけあや(意識が低い方)</div> <div style="text-align: center; font-size: 14px;">@_ikeay</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from いけあや(意識が低い方). | Data | いけあや(意識が低い方) | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 26 | | Short tweets | 2264 | | Tweets kept | 959 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2c6c03ss/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_ikeay's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/r85zooae) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/r85zooae/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/_ikeay') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/eri_razapii
huggingtweets
2022-01-29T08:31:32Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/eri_razapii/1643445087789/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1463699400405164034/aRY9jlnO_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">えりらざぴ | SHE CEO/CCO</div> <div style="text-align: center; font-size: 14px;">@eri_razapii</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from えりらざぴ | SHE CEO/CCO. | Data | えりらざぴ \| SHE CEO/CCO | | --- | --- | | Tweets downloaded | 3232 | | Retweets | 1778 | | Short tweets | 831 | | Tweets kept | 623 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2eraewg4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eri_razapii's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30n8ile8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30n8ile8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eri_razapii') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/twentyonepilots
huggingtweets
2022-01-29T07:40:09Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/twentyonepilots/1643442004355/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1379847503324057601/LH84R4zr_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">twenty one pilots</div> <div style="text-align: center; font-size: 14px;">@twentyonepilots</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from twenty one pilots. | Data | twenty one pilots | | --- | --- | | Tweets downloaded | 3190 | | Retweets | 537 | | Short tweets | 287 | | Tweets kept | 2366 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cw9xn7c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @twentyonepilots's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/trh1am21) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/trh1am21/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/twentyonepilots') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
k-partha/decision_bert_bio
k-partha
2022-01-29T03:36:59Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2109.06402", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Rates Twitter biographies on decision-making preference: Thinking or Feeling. This axis roughly corresponds to [agreeableness](https://en.wikipedia.org/wiki/Agreeableness). Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and run the classifier! The model was trained on self-described personality labels. Interpret the output as a continuous score rather than a discrete label, and keep in mind that the model relies on purely statistical reasoning (so it may occasionally make no sense). Have fun! Note: performance on inputs other than Twitter biographies (the training data source) has not been verified. For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402).
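For quick experimentation outside the widget, a minimal inference sketch with the standard transformers pipeline (the example biography is made up, and the label names come from the model's own config):

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned BERT checkpoint
classifier = pipeline("text-classification", model="k-partha/decision_bert_bio")

bio = "Startup founder. I decide with my head, not my heart."  # hypothetical input
print(classifier(bio))  # e.g. [{'label': '...', 'score': 0.87}]
```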
facebook/tts_transformer-vi-cv7
facebook
2022-01-28T23:31:48Z
29
11
fairseq
[ "fairseq", "audio", "text-to-speech", "vi", "dataset:common_voice", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: vi datasets: - common_voice widget: - text: "Xin chào, đây là một cuộc chạy thử nghiệm." example_title: "Hello, this is a test run." --- # tts_transformer-vi-cv7 [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - Vietnamese - Single-speaker male voice - Trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/tts_transformer-vi-cv7", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Xin chào, đây là một cuộc chạy thử nghiệm." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
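To keep the synthesized audio instead of playing it inline, one more step suffices; a sketch assuming the `soundfile` package is installed and that `wav` is the torch tensor returned above:

```python
import soundfile as sf

# Write the synthesized waveform to disk as a WAV file at the model's sample rate
sf.write("tts_output.wav", wav.cpu().numpy(), rate)
```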
facebook/tts_transformer-ar-cv7
facebook
2022-01-28T23:31:25Z
51
8
fairseq
[ "fairseq", "audio", "text-to-speech", "ar", "dataset:common_voice", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: ar datasets: - common_voice widget: - text: "مرحبًا ، هذا اختبار تشغيل." example_title: "Hello, this is a test run." --- # tts_transformer-ar-cv7 [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - Arabic - Single-speaker male voice - Trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/tts_transformer-ar-cv7", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "مرحبًا ، هذا اختبار تشغيل." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
facebook/tts_transformer-tr-cv7
facebook
2022-01-28T23:30:54Z
14
10
fairseq
[ "fairseq", "audio", "text-to-speech", "tr", "dataset:common_voice", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: tr datasets: - common_voice widget: - text: "Merhaba, bu bir deneme çalışmasıdır." example_title: "Hello, this is a test run." --- # tts_transformer-tr-cv7 [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - Turkish - Single-speaker male voice - Trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/tts_transformer-tr-cv7", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Merhaba, bu bir deneme çalışmasıdır." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
facebook/tts_transformer-zh-cv7_css10
facebook
2022-01-28T23:30:17Z
32
85
fairseq
[ "fairseq", "audio", "text-to-speech", "zh", "dataset:common_voice", "dataset:css10", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: zh datasets: - common_voice - css10 widget: - text: "您好,这是试运行。" example_title: "Hello, this is a test run." --- # tts_transformer-zh-cv7_css10 [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - Simplified Chinese - Single-speaker female voice - Pre-trained on [Common Voice v7](https://commonvoice.mozilla.org/en/datasets), fine-tuned on [CSS10](https://github.com/Kyubyong/css10) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/tts_transformer-zh-cv7_css10", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "您好,这是试运行。" sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
facebook/tts_transformer-en-200_speaker-cv4
facebook
2022-01-28T23:27:28Z
11
2
fairseq
[ "fairseq", "audio", "text-to-speech", "multi-speaker", "en", "dataset:common_voice", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech - multi-speaker language: en datasets: - common_voice widget: - text: "Hello, this is a test run." example_title: "Hello, this is a test run." --- # tts_transformer-en-200_speaker-cv4 [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - English - 200 male/female voices (random speaker when using the widget) - Trained on [Common Voice v4](https://commonvoice.mozilla.org/en/datasets) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/tts_transformer-en-200_speaker-cv4", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Hello, this is a test run." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
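The widget picks a random speaker; to request a specific one programmatically, the hub interface is commonly used with a speaker index. This is a hypothetical continuation of the snippet above, and the `speaker` keyword of `get_model_input` is an assumption to verify against your fairseq version:

```python
# Hypothetical: pick speaker 5 out of the 200 voices instead of a random one.
# Assumes TTSHubInterface.get_model_input accepts a `speaker` keyword; fall
# back to the default call shown above if your fairseq version does not.
sample = TTSHubInterface.get_model_input(task, text, speaker=5)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
```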
facebook/tts_transformer-en-ljspeech
facebook
2022-01-28T23:26:35Z
36
6
fairseq
[ "fairseq", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: en datasets: - ljspeech widget: - text: "Hello, this is a test run." example_title: "Hello, this is a test run." --- # tts_transformer-en-ljspeech [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - English - Single-speaker female voice - Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/tts_transformer-en-ljspeech", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Hello, this is a test run." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
facebook/fastspeech2-en-ljspeech
facebook
2022-01-28T23:25:24Z
2,168
268
fairseq
[ "fairseq", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:2006.04558", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: en datasets: - ljspeech widget: - text: "Hello, this is a test run." example_title: "Hello, this is a test run." --- # fastspeech2-en-ljspeech [FastSpeech 2](https://arxiv.org/abs/2006.04558) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - English - Single-speaker female voice - Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Hello, this is a test run." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
Kneecapsnatcher/Unon
Kneecapsnatcher
2022-01-28T21:21:10Z
0
0
null
[ "license:bsd-2-clause", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: bsd-2-clause ---
Langame/distilgpt2-starter
Langame
2022-01-28T21:03:53Z
18
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "dataset:Langame/starter", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - Langame/starter model-index: - name: distilgpt2-starter results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-starter This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the Langame/starter dataset. It achieves the following results on the evaluation set: - Loss: 6.0234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 500.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 66.67 | 200 | 3.6445 | | No log | 133.33 | 400 | 4.5703 | | 1.0101 | 200.0 | 600 | 5.2109 | | 1.0101 | 266.67 | 800 | 5.5430 | | 0.0681 | 333.33 | 1000 | 5.7227 | | 0.0681 | 400.0 | 1200 | 5.8672 | | 0.0681 | 466.67 | 1400 | 5.9961 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.11.0
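The autogenerated card stops at training metrics; a minimal generation sketch (prompt and sampling parameters are illustrative, not taken from the training setup):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Langame/distilgpt2-starter")

# Sample a few candidate conversation starters from the fine-tuned model
for out in generator("What is", max_length=40, do_sample=True, num_return_sequences=3):
    print(out["generated_text"])
```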
anjulRajendraSharma/WavLm-base-en
anjulRajendraSharma
2022-01-28T16:40:52Z
58
0
transformers
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "english_asr", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - english_asr - generated_from_trainer model-index: - name: wavlm-base-english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-base-english This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the english_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0955 - Wer: 0.0773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8664 | 0.17 | 300 | 2.8439 | 1.0 | | 0.5009 | 0.34 | 600 | 0.2709 | 0.2162 | | 0.2056 | 0.5 | 900 | 0.1934 | 0.1602 | | 0.1648 | 0.67 | 1200 | 0.1576 | 0.1306 | | 0.1922 | 0.84 | 1500 | 0.1358 | 0.1114 | | 0.093 | 1.01 | 1800 | 0.1277 | 0.1035 | | 0.0652 | 1.18 | 2100 | 0.1251 | 0.1005 | | 0.0848 | 1.35 | 2400 | 0.1188 | 0.0964 | | 0.0706 | 1.51 | 2700 | 0.1091 | 0.0905 | | 0.0846 | 1.68 | 3000 | 0.1018 | 0.0840 | | 0.0684 | 1.85 | 3300 | 0.0978 | 0.0809 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.0 - Tokenizers 0.10.3
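The card omits a usage section; a minimal inference sketch, assuming the checkpoint carries a CTC head and the input is a 16 kHz mono recording (the standard setup for WavLM fine-tunes; the file name is hypothetical):

```python
from transformers import pipeline

# ASR pipeline over the fine-tuned WavLM checkpoint
asr = pipeline("automatic-speech-recognition", model="anjulRajendraSharma/WavLm-base-en")

print(asr("sample.wav"))  # prints {'text': '...'}
```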
alperiox/autonlp-user-review-classification-536415182
alperiox
2022-01-28T16:30:08Z
9
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:alperiox/autonlp-data-user-review-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - alperiox/autonlp-data-user-review-classification co2_eq_emissions: 1.268309634217171 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 536415182 - CO2 Emissions (in grams): 1.268309634217171 ## Validation Metrics - Loss: 0.44733062386512756 - Accuracy: 0.8873239436619719 - Macro F1: 0.8859416445623343 - Micro F1: 0.8873239436619719 - Weighted F1: 0.8864646766540891 - Macro Precision: 0.8848522167487685 - Micro Precision: 0.8873239436619719 - Weighted Precision: 0.8883299798792756 - Macro Recall: 0.8908045977011494 - Micro Recall: 0.8873239436619719 ## Usage You can use cURL to access this model: ```bash $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alperiox/autonlp-user-review-classification-536415182 ``` Or the Python API: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
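The Python snippet above stops at the raw model outputs; turning logits into a predicted label takes one more step. A self-contained sketch using standard PyTorch ops and the model's built-in `id2label` mapping:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "alperiox/autonlp-user-review-classification-536415182"
model = AutoModelForSequenceClassification.from_pretrained(model_id, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)      # normalize logits into class probabilities
pred = probs.argmax(dim=-1).item()  # index of the most likely class
print(model.config.id2label[pred], round(probs[0, pred].item(), 4))
```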
Rocketknight1/distilgpt2-finetuned-wikitext2
Rocketknight1
2022-01-28T13:23:20Z
14
0
transformers
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Rocketknight1/distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8577 - Validation Loss: 3.6752 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8577 | 3.6752 | 0 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 1.17.0 - Tokenizers 0.11.0
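The generated card includes no usage snippet; a minimal TensorFlow sketch (prompt and decoding parameters are illustrative):

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "Rocketknight1/distilgpt2-finetuned-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The history of", return_tensors="tf")
outputs = model.generate(**inputs, max_length=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```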
huggingtweets/cobie-coinerstakingls
huggingtweets
2022-01-28T11:19:03Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/cobie-coinerstakingls/1643368738479/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1394891459900231689/xXdX3yWP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1471649307887558661/SpH6Dho7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Crypto Bros Taking Ls & Cobie</div> <div style="text-align: center; font-size: 14px;">@cobie-coinerstakingls</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Crypto Bros Taking Ls & Cobie. | Data | Crypto Bros Taking Ls | Cobie | | --- | --- | --- | | Tweets downloaded | 566 | 3248 | | Retweets | 94 | 93 | | Short tweets | 222 | 500 | | Tweets kept | 250 | 2655 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gjf29z1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cobie-coinerstakingls's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c8xc9umf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c8xc9umf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/cobie-coinerstakingls') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
google/vit-large-patch32-384
google
2022-01-28T10:24:24Z
186,213
16
transformers
[ "transformers", "pytorch", "tf", "jax", "vit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # Vision Transformer (large-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384. Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384') model = ViTForImageClassification.from_pretrained('google/vit-large-patch32-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{dosovitskiy2020image, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby}, year={2020}, eprint={2010.11929}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
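As the model description notes, fine-tuning amounts to putting a fresh linear head on the pre-trained encoder; a minimal sketch (the 10-class task is hypothetical):

```python
from transformers import ViTForImageClassification

# Load the backbone and swap the 1000-class ImageNet head for a new one;
# ignore_mismatched_sizes lets the freshly initialized 10-class head differ
# in shape from the checkpoint's classifier weights.
model = ViTForImageClassification.from_pretrained(
    'google/vit-large-patch32-384',
    num_labels=10,
    ignore_mismatched_sizes=True,
)
```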
google/vit-large-patch16-384
google
2022-01-28T10:22:26Z
8,875
12
transformers
[ "transformers", "pytorch", "tf", "jax", "vit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # Vision Transformer (large-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-384') model = ViTForImageClassification.from_pretrained('google/vit-large-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{dosovitskiy2020image, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby}, year={2020}, eprint={2010.11929}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
microsoft/beit-large-patch16-512
microsoft
2022-01-28T10:20:07Z
824
9
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (large-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 512x512. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-512') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-512') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch.
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
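To use the encoder as a plain feature extractor, mean-pooling the patch hidden states as the model description suggests, a minimal sketch:

```python
import torch
import requests
from PIL import Image
from transformers import BeitFeatureExtractor, BeitModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-512')
model = BeitModel.from_pretrained('microsoft/beit-large-patch16-512')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, sequence_length, hidden_size)

features = hidden.mean(dim=1)  # mean-pool over the patch sequence
print(features.shape)          # (1, hidden_size)
```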
microsoft/beit-large-patch16-384
microsoft
2022-01-28T10:19:50Z
242
0
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (large-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-384') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch.
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
microsoft/beit-base-patch16-384
microsoft
2022-01-28T10:19:30Z
409
5
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (base-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384') model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
microsoft/beit-large-patch16-224
microsoft
2022-01-28T10:19:16Z
1,916
1
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (large-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-224') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
hrdipto/wav2vec2-xls-r-tf-left-right-shuru-word-level
hrdipto
2022-01-28T09:54:27Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-tf-left-right-shuru-word-level results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-shuru-word-level This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0504 - Wer: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 23.217 | 23.81 | 500 | 1.3437 | 0.6859 | | 1.1742 | 47.62 | 1000 | 1.0397 | 0.6859 | | 1.0339 | 71.43 | 1500 | 1.0155 | 0.6859 | | 0.9909 | 95.24 | 2000 | 1.0504 | 0.6859 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
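The card above omits a usage example. A minimal inference sketch — not from the original card, assuming 16 kHz mono input, standard greedy CTC decoding, and `audio.wav` as a placeholder path — could look like this:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load the fine-tuned checkpoint and its processor
processor = Wav2Vec2Processor.from_pretrained("hrdipto/wav2vec2-xls-r-tf-left-right-shuru-word-level")
model = Wav2Vec2ForCTC.from_pretrained("hrdipto/wav2vec2-xls-r-tf-left-right-shuru-word-level")

# wav2vec2 expects 16 kHz mono input; "audio.wav" is a placeholder path
speech, _ = librosa.load("audio.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```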
RASMUS/wav2vec2-xlsr-fi-train-aug-bigLM-1B
RASMUS
2022-01-27T23:00:16Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "mozilla-foundation/common_voice_7_0", "audio", "speech", "fi", "dataset:mozilla-foundation/common_voice_7_0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: fi datasets: - mozilla-foundation/common_voice_7_0 metrics: - wer - cer tags: - generated_from_trainer - mozilla-foundation/common_voice_7_0 - audio - automatic-speech-recognition - speech model-index: - name: XLS-R 1B Wav2Vec2 Finnish by Rasmus Toivanen results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: fi --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-fi-train-aug-lm-1B This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1499 - Wer: 0.1955 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6473 | 0.29 | 400 | 0.2857 | 0.3825 | | 0.6039 | 0.58 | 800 | 0.2459 | 0.3476 | | 0.4757 | 0.87 | 1200 | 0.2338 | 0.3274 | | 0.4473 | 1.15 | 1600 | 0.2246 | 0.3128 | | 0.4322 | 1.44 | 2000 | 0.1962 | 0.2805 | | 0.3961 | 1.73 | 2400 | 0.2070 | 0.2797 | | 0.3642 | 2.02 | 2800 | 0.1790 | 0.2473 | | 0.3561 | 2.31 | 3200 | 0.1769 | 0.2375 | | 0.282 | 2.6 | 3600 | 0.1672 | 0.2263 | | 0.2978 | 2.89 | 4000 | 0.1636 | 0.2192 | | 0.2722 | 3.17 | 4400 | 0.1637 | 0.2102 | | 0.2924 | 3.46 | 4800 | 0.1506 | 0.2021 | | 0.2631 | 3.75 | 5200 | 0.1499 | 0.1955 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
huggingtweets/glitchy22
huggingtweets
2022-01-27T21:05:00Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/glitchy22/1643317484748/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484899984126451716/oY7g67aC_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">💙💗🤍 Mama Ava's House of Fun 💙💗🤍</div> <div style="text-align: center; font-size: 14px;">@glitchy22</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 💙💗🤍 Mama Ava's House of Fun 💙💗🤍. | Data | 💙💗🤍 Mama Ava's House of Fun 💙💗🤍 | | --- | --- | | Tweets downloaded | 1690 | | Retweets | 198 | | Short tweets | 387 | | Tweets kept | 1105 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2h5yvnyr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glitchy22's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t3bkiiv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t3bkiiv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/glitchy22') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
vuiseng9/wav2vec2-base-100h
vuiseng9
2022-01-27T20:03:25Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "en", "dataset:librispeech_asr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - audio - automatic-speech-recognition license: apache-2.0 --- # Wav2Vec2-Base-100h This is a fork of [`facebook/wav2vec2-base-100h`](https://huggingface.co/facebook/wav2vec2-base-100h) ### Changes & Notes 1. Documents a reproducible evaluation (below) for newer transformers and datasets versions. 2. Use a batch size of 1 to reproduce the results. 3. Validated with `transformers v4.15.0` and `datasets 1.18.0`. 4. You may need to manually install the Python packages `librosa` and `jiwer`. ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import soundfile as sf import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # librispeech_eval = load_dataset("librispeech_asr", "other", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h") def map_to_array(batch): # speech, _ = sf.read(batch["file"]) # batch["speech"] = speech batch["speech"] = batch['audio']['array'] return batch librispeech_eval = librispeech_eval.map(map_to_array) def map_to_pred(batch): input_values = processor(batch["speech"], return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean/test" | "other/test" | |--------------| ------------| | 6.1 | 13.5 |
huggingtweets/thenamefaceless
huggingtweets
2022-01-27T19:59:10Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/thenamefaceless/1643313546109/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1428260501016834056/u8xbVi4l_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Faceless</div> <div style="text-align: center; font-size: 14px;">@thenamefaceless</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Faceless. | Data | Faceless | | --- | --- | | Tweets downloaded | 581 | | Retweets | 165 | | Short tweets | 55 | | Tweets kept | 361 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1i6xge70/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thenamefaceless's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bbby02j) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bbby02j/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/thenamefaceless') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
cd-dvd/testmodel2
cd-dvd
2022-01-27T19:45:14Z
5
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "Text Generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - Text Generation --- # GIMPLEARN knows modeltest2 To generate a conversation, use an input such as `Human: What should I do?\nAI:`
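A short generation sketch following the prompt format the card describes; the sampling parameters below are illustrative and not from the author:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="cd-dvd/testmodel2")

# Prompt format suggested by the card: a "Human:" turn followed by "AI:"
prompt = "Human: What should I do?\nAI:"
result = generator(prompt, max_new_tokens=50, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```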
Rocketknight1/t5-small-finetuned-xsum
Rocketknight1
2022-01-27T19:39:43Z
4
0
transformers
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Rocketknight1/t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7172 - Validation Loss: 2.3977 - Train Rouge1: 28.7469 - Train Rouge2: 7.9005 - Train Rougel: 22.5917 - Train Rougelsum: 22.6162 - Train Gen Len: 18.875 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 2.7172 | 2.3977 | 28.7469 | 7.9005 | 22.5917 | 22.6162 | 18.875 | 0 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 1.17.0 - Tokenizers 0.11.0
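Since the card lacks a usage example, here is a minimal TensorFlow summarization sketch; the `summarize: ` prefix is an assumption based on the usual T5 convention and is not stated in the card:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/t5-small-finetuned-xsum")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Rocketknight1/t5-small-finetuned-xsum")

# T5 checkpoints are conventionally prompted with a task prefix (assumption)
article = "The local council has approved plans for a new bridge across the river."
inputs = tokenizer("summarize: " + article, return_tensors="tf", truncation=True)

summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```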
vkhangpham/shopee-ner
vkhangpham
2022-01-27T19:15:22Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: shopee-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shopee-ner This model is a fine-tuned version of [cahya/xlm-roberta-base-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-base-indonesian-NER) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2046 - Precision: 0.7666 - Recall: 0.8666 - F1: 0.8135 - Accuracy: 0.9320 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2282 | 1.0 | 33750 | 0.2174 | 0.7443 | 0.8506 | 0.7939 | 0.9253 | | 0.1983 | 2.0 | 67500 | 0.2046 | 0.7666 | 0.8666 | 0.8135 | 0.9320 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
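The card does not show how to run the model; a minimal sketch using the token-classification pipeline, with a made-up product title for illustration:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entities
ner = pipeline("token-classification",
               model="vkhangpham/shopee-ner",
               aggregation_strategy="simple")

# Hypothetical Indonesian e-commerce product title
print(ner("Sepatu Running Nike Original untuk Pria"))
```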
mrm8488/ppo-BipedalWalker-v3
mrm8488
2022-01-27T19:12:00Z
0
2
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - bipedal - walker - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 --- # PPO BipedalWalker v3 🤖🚶🏼 This is a pre-trained model of a PPO agent playing BipedalWalker-v3 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library. <video loop="" autoplay="" controls="" src="https://huggingface.co/mrm8488/ppo-BipedalWalker-v3/resolve/main/output.mp4"></video> ### Usage (with Stable-baselines3) Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed: ``` pip install stable-baselines3 pip install huggingface_sb3 ``` Then, you can use the model like this: ```python import gym from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO from stable_baselines3.common.evaluation import evaluate_policy # Retrieve the model from the hub ## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name}) ## filename = name of the model zip file from the repository checkpoint = load_from_hub(repo_id="mrm8488/ppo-BipedalWalker-v3", filename="bipedalwalker-v3.zip") model = PPO.load(checkpoint) # Evaluate the agent eval_env = gym.make('BipedalWalker-v3') mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True) print(f"mean_reward={mean_reward:.2f} +/- {std_reward}") # Watch the agent play env = gym.make('BipedalWalker-v3') obs = env.reset() for i in range(1000): action, _state = model.predict(obs) obs, reward, done, info = env.step(action) env.render() if done: obs = env.reset() env.close() ``` ### Evaluation Results Mean reward: 213.55 +/- 113.82
Jacobo/aristoBERTo
Jacobo
2022-01-27T19:02:16Z
10
5
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "grc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- tags: language: - grc model-index: - name: aristoBERTo results: [] widget: - text: "Πλάτων ὁ Περικτιόνης [MASK] γένος ἀνέφερεν εἰς Σόλωνα." - text: "ὁ Κριτίας ἀπέβλεψε [MASK] τὴν θύραν." - text: "πρῶτοι δὲ καὶ οὐνόματα ἱρὰ ἔγνωσαν καὶ [MASK] ἱροὺς ἔλεξαν." --- # aristoBERTo aristoBERTo is a transformer model for ancient Greek, a low-resource language. We initialized the pre-training with weights from [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1), a Greek version of BERT which was trained on a large corpus of modern Greek (~ 30 GB of texts). We continued the pre-training with an ancient Greek corpus of about 900 MB, which was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed. Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mdeberta in most downstream tasks like the labeling of POS, MORPH, DEP and LEMMA. aristoBERTo is provided by the [Diogenet project](https://diogenet.ucsd.edu) of the University of California, San Diego. ## Intended uses This model was created for fine-tuning with spaCy and the ancient Greek Universal Dependency datasets as well as a NER corpus produced by the [Diogenet project](https://diogenet.ucsd.edu). As a fill-mask model, aristoBERTo can also be used in the restoration of damaged Greek papyri, inscriptions, and manuscripts. It achieves the following results on the evaluation set: - Loss: 1.6323 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 1.377 | 20.0 | 3414220 | 1.6314 | ### Framework versions - Transformers 4.14.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
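A minimal fill-mask sketch, reusing one of the widget examples from this card as input:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Jacobo/aristoBERTo")

# Widget example from the card above
for prediction in fill_mask("ὁ Κριτίας ἀπέβλεψε [MASK] τὴν θύραν."):
    print(prediction["token_str"], round(prediction["score"], 3))
```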
anirudh21/albert-large-v2-finetuned-rte
anirudh21
2022-01-27T18:29:58Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: albert-large-v2-finetuned-rte results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.5487364620938628 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2-finetuned-rte This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6827 - Accuracy: 0.5487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 18 | 0.6954 | 0.5271 | | No log | 2.0 | 36 | 0.6860 | 0.5379 | | No log | 3.0 | 54 | 0.6827 | 0.5487 | | No log | 4.0 | 72 | 0.7179 | 0.5235 | | No log | 5.0 | 90 | 0.7504 | 0.5379 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
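A minimal sentence-pair inference sketch for this checkpoint; RTE is a two-sentence entailment task, and the premise/hypothesis pair below is invented for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("anirudh21/albert-large-v2-finetuned-rte")
model = AutoModelForSequenceClassification.from_pretrained("anirudh21/albert-large-v2-finetuned-rte")

# RTE pairs a premise with a hypothesis; both are encoded together
inputs = tokenizer("A man is playing a guitar on stage.",
                   "A man is performing music.",
                   return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```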
mbateman/marian-finetuned-kde4-en-to-fr
mbateman
2022-01-27T17:33:02Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
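No usage example is given in the card; a minimal translation sketch, where the input string is an arbitrary KDE-style UI message:

```python
from transformers import pipeline

translator = pipeline("translation", model="mbateman/marian-finetuned-kde4-en-to-fr")

# Sample English UI string in the spirit of the kde4 training data
print(translator("Default to expanded threads")[0]["translation_text"])
```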
Adinda/Adinda
Adinda
2022-01-27T17:02:42Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: artistic-2.0 ---
huggingtweets/northernlion
huggingtweets
2022-01-27T16:46:04Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/northernlion/1643301960230/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2236512789/ChannelIcon_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ryan Letourneau</div> <div style="text-align: center; font-size: 14px;">@northernlion</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ryan Letourneau. | Data | Ryan Letourneau | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 85 | | Short tweets | 480 | | Tweets kept | 2684 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xmzb7x7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @northernlion's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dilt40l) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dilt40l/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/northernlion') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
bayartsogt/tts_transformer-mn-mbspeech
bayartsogt
2022-01-27T16:35:40Z
18
1
fairseq
[ "fairseq", "audio", "text-to-speech", "mn", "dataset:mbspeech", "arxiv:1809.08895", "arxiv:2109.06912", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: mn datasets: - mbspeech widget: - text: "миний нэрийг баярцогт гэдэг" example_title: "Say my name!" - text: "би монгол улсын нийслэл, улаанбаатар хотод амьдардаг" example_title: "Where am I from?" - text: "энэхүү өгөгдлийг нээлттэй болгосон, болор соофтынхонд баярлалаа" example_title: "Thank you!" - text: "энэхүү ажлын ихэнх хэсгийг, төгөлдөр ах хийсэн болно" example_title: "Shout out to the original creator" --- # tts_transformer-mn-mbspeech [Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - Mongolian - Single-speaker male voice - Trained on [MBSpeech](https://github.com/tugstugi/mongolian-nlp/blob/master/datasets/MBSpeech-1.0-csv.zip)
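The card links to the fairseq S^2 example code; a synthesis sketch along the lines of the fairseq hub interface follows. The vocoder override is an assumption (not stated in the card), and the input text is the first widget example:

```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "bayartsogt/tts_transformer-mn-mbspeech",
    arg_overrides={"vocoder": "griffin_lim", "fp16": False},  # vocoder choice is an assumption
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)

text = "миний нэрийг баярцогт гэдэг"  # widget example from this card
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
# `wav` is a waveform tensor at sample rate `rate`, ready to save or play
```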
mrm8488/ppo-CartPole-v1
mrm8488
2022-01-27T15:13:48Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 --- # PPO CartPole v1 🤖⚖️ This is a pre-trained model of a PPO agent playing CartPole-v1 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library. <video loop="" autoplay="" controls="" src="https://huggingface.co/mrm8488/ppo-CartPole-v1/resolve/main/output.mp4"></video> ### Usage (with Stable-baselines3) Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed: ``` pip install stable-baselines3 pip install huggingface_sb3 ``` Then, you can use the model like this: ```python import gym from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO from stable_baselines3.common.evaluation import evaluate_policy # Retrieve the model from the hub ## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name}) ## filename = name of the model zip file from the repository checkpoint = load_from_hub(repo_id="mrm8488/ppo-CartPole-v1", filename="cartpole-v1.zip") model = PPO.load(checkpoint) # Evaluate the agent eval_env = gym.make('CartPole-v1') mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True) print(f"mean_reward={mean_reward:.2f} +/- {std_reward}") # Watch the agent play env = gym.make('CartPole-v1') obs = env.reset() for i in range(1000): action, _state = model.predict(obs) obs, reward, done, info = env.step(action) env.render() if done: obs = env.reset() env.close() ``` ### Evaluation Results Mean reward: 500.00 +/- 0.0
jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom
jhonparra18
2022-01-27T14:58:01Z
15
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer - robust-speech-event datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-custom This model was trained from scratch on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2245 - eval_wer: 0.2082 - eval_runtime: 801.6784 - eval_samples_per_second: 18.822 - eval_steps_per_second: 2.354 - epoch: 0.76 - step: 8400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
oskrmiguel/mt5-simplification-spanish
oskrmiguel
2022-01-27T13:32:24Z
22
6
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "simplification", "spanish", "es", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - es thumbnail: tags: - simplification - mt5 - spanish license: cc-by-nc-sa-4.0 metrics: - sari widget: - text: "La Simplificación Textual es el proceso de transformación de un texto a otro texto equivalente más comprensible para un determinado tipo de grupo o población." - text: "Los textos simplificados son apropiados para muchos grupos de lectores, como, por ejemplo: estudiantes de idiomas, personas con discapacidades intelectuales y otras personas con necesidades especiales de lectura y comprensión. " --- # mt5-simplification-spanish ## Model description This is a fine-tuned mt5-small model for generating simple text from complex text. This model was created with the IXA research group of the University of the Basque Country. The model has been evaluated with the SARI, BLEU and FKGL metrics; it was trained and tested using the [Simplext corpus](https://dl.acm.org/doi/10.1145/2738046). ## Dataset Simplext ## Model Evaluation - BLEU: 13.186 - SARI: 42.203 - FKGL: 10.284 ## Authors Oscar M. Cumbicus-Pineda, Itziar Gonzalez-Dios, Aitor Soroa, November 2021 ## Code https://github.com/oskrmiguel/mt5-simplification
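A minimal simplification sketch, using the first widget example from the card as input:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("oskrmiguel/mt5-simplification-spanish")
model = AutoModelForSeq2SeqLM.from_pretrained("oskrmiguel/mt5-simplification-spanish")

# Widget example from the card above
text = ("La Simplificación Textual es el proceso de transformación de un texto "
        "a otro texto equivalente más comprensible para un determinado tipo "
        "de grupo o población.")
inputs = tokenizer(text, return_tensors="pt", truncation=True)

outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```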
anirudh21/albert-xxlarge-v2-finetuned-wnli
anirudh21
2022-01-27T13:00:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: albert-xxlarge-v2-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5070422535211268 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xxlarge-v2-finetuned-wnli This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6970 - Accuracy: 0.5070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 13 | 0.8066 | 0.4366 | | No log | 2.0 | 26 | 0.6970 | 0.5070 | | No log | 3.0 | 39 | 0.7977 | 0.4507 | | No log | 4.0 | 52 | 0.7906 | 0.4930 | | No log | 5.0 | 65 | 0.8459 | 0.4366 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
benjaminbeilharz/dialoGPT-small-empatheticdialogues-generation
benjaminbeilharz
2022-01-27T11:07:49Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "conversational", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en datasets: - empathetic_dialogues tags: - conversational - pytorch - transformers - gpt2 license: mit --- Still figuring out how to properly write model cards. WIP.
anirudh21/bert-base-uncased-finetuned-qnli
anirudh21
2022-01-27T08:21:03Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-qnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.791689547867472 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-qnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6268 - Accuracy: 0.7917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 0.5339 | 0.7620 | | No log | 2.0 | 126 | 0.4728 | 0.7866 | | No log | 3.0 | 189 | 0.5386 | 0.7847 | | No log | 4.0 | 252 | 0.6096 | 0.7904 | | No log | 5.0 | 315 | 0.6268 | 0.7917 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
anirudh21/bert-base-uncased-finetuned-rte
anirudh21
2022-01-27T06:57:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-rte results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.6642599277978339 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-rte This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8075 - Accuracy: 0.6643 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 0.6777 | 0.5668 | | No log | 2.0 | 126 | 0.6723 | 0.6282 | | No log | 3.0 | 189 | 0.7238 | 0.6318 | | No log | 4.0 | 252 | 0.7993 | 0.6354 | | No log | 5.0 | 315 | 0.8075 | 0.6643 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
carlosaguayo/pegasus-samsum
carlosaguayo
2022-01-27T06:14:31Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7197 | 0.54 | 500 | 1.4842 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
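No usage example is included in the card; a dialogue-summarization sketch, with a chat transcript invented in the SAMSum style:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="carlosaguayo/pegasus-samsum")

# SAMSum-style chat transcript, made up for illustration
dialogue = (
    "Hannah: Hey, do you have Betty's number?\n"
    "Amanda: Lemme check.\n"
    "Amanda: Sorry, can't find it.\n"
    "Hannah: Ok, thanks anyway."
)
print(summarizer(dialogue)[0]["summary_text"])
```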
anas-awadalla/bert-small-pretrained-finetuned-squad
anas-awadalla
2022-01-27T06:09:41Z
30
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: bert-small-pretrained-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-pretrained-finetuned-squad This model is a fine-tuned version of [anas-awadalla/bert-small-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-small-pretrained-on-squad) on the squad dataset. It achieves the following results on the evaluation set: - exact_match: 72.20435193945127 - f1: 81.31832229156294 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
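A minimal extractive QA sketch for this checkpoint; the question/context pair is invented for illustration:

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="anas-awadalla/bert-small-pretrained-finetuned-squad")

context = ("The Amazon rainforest covers most of the Amazon basin of South "
           "America, an area of seven million square kilometres.")
result = qa(question="How large is the Amazon basin?", context=context)
print(result["answer"], result["score"])
```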
anas-awadalla/bert-medium-pretrained-finetuned-squad
anas-awadalla
2022-01-27T06:07:11Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: bert_medium_pretrain_squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_medium_pretrain_squad This model is a fine-tuned version of [anas-awadalla/bert-medium-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-medium-pretrained-on-squad) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.0973 - exact_match: 77.95648060548723 - f1: 85.85300366384631 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
sankhajay/mt5-base-sinaha-qa
sankhajay
2022-01-27T05:35:18Z
6
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: si tags: - question-answering - Sinhala widget: - context: "ශ්‍රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි." text: "ශ්‍රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?" --- # mt5-base-sinhala-qa This is an mt5-based question-answering model for the Sinhalese language. Training was done on a translated SQuAD dataset of 8k questions; the translation was done with the Google Translate API. Training was performed in a Google Colab TPU environment with parallel training techniques, on around 9k data points consisting of context, question, answer trios for the Sinhala language. Evaluation was done using the standard SQuAD evaluation script on around 1k data points, which gave the following results with the best parameter setting. The evaluation metrics used are the EM metric and the F1 score metric. Evaluation - {'EM': 39.413680781758956, 'f1': 66.16331104953571}
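A heavily hedged inference sketch: the exact input format the model was trained with is not documented, so the `question: ... context: ...` template below is an assumption, and the strings are the widget example from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sankhajay/mt5-base-sinaha-qa")
model = AutoModelForSeq2SeqLM.from_pretrained("sankhajay/mt5-base-sinaha-qa")

# Widget example from the card; the concatenation template is an assumption
context = "ශ්‍රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි."
question = "ශ්‍රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?"
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt")

outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```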
anirudh21/albert-large-v2-finetuned-wnli
anirudh21
2022-01-27T05:02:43Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: albert-large-v2-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5352112676056338 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2-finetuned-wnli This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6919 - Accuracy: 0.5352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 17 | 0.7292 | 0.4366 | | No log | 2.0 | 34 | 0.6919 | 0.5352 | | No log | 3.0 | 51 | 0.7084 | 0.4648 | | No log | 4.0 | 68 | 0.7152 | 0.5352 | | No log | 5.0 | 85 | 0.7343 | 0.5211 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.1 - Tokenizers 0.10.3
glob-asr/base-spanish-asr
glob-asr
2022-01-27T03:35:42Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-spanish-custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-spanish-custom This model was trained from scratch on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2245 - eval_wer: 0.2082 - eval_runtime: 801.6784 - eval_samples_per_second: 18.822 - eval_steps_per_second: 2.354 - epoch: 0.76 - step: 8400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
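For illustration, the checkpoint above can be exercised through the ASR pipeline; the audio file name is a placeholder, and 16 kHz mono input is assumed, as is standard for XLS-R fine-tunes.

```python
# Hedged sketch: Spanish transcription with the automatic-speech-recognition pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="glob-asr/base-spanish-asr")
print(asr("sample_es.wav")["text"])  # "sample_es.wav" is a hypothetical 16 kHz mono clip
```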
boris/dalle-mini-tokenizer
boris
2022-01-27T01:42:39Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
A tokenizer based on `facebook/bart-large-cnn`, trained on captions normalized by [dalle-mini](https://github.com/borisdayma/dalle-mini).
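A minimal loading sketch, assuming the repo ships standard tokenizer files consumable by `AutoTokenizer`:

```python
from transformers import AutoTokenizer

# Load the caption tokenizer and encode a sample caption.
tokenizer = AutoTokenizer.from_pretrained("boris/dalle-mini-tokenizer")
print(tokenizer("a watercolor painting of a fox").input_ids)
```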
Mingyi/classify_title_subject
Mingyi
2022-01-26T23:29:36Z
9
3
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: tmp6tsjsfbf results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp6tsjsfbf This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0178 - Train Sparse Categorical Accuracy: 0.9962 - Epoch: 49 ## Model description This model classifies the title of a piece of content (e.g., a YouTube video, article, or podcast episode) into one of 8 subjects: 0. art, 1. personal development, 2. world, 3. health, 4. science, 5. business, 6. humanities, 7. technology. The model is used to support [Sanderling](https://sanderling.app). ## Intended uses & limitations More information needed ## Training and evaluation data We used 1.5k labeled titles to train the model. The majority of the training dataset is English titles; the rest are Chinese titles. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:-----:| | 1.8005 | 0.3956 | 0 | | 1.3302 | 0.5916 | 1 | | 0.8998 | 0.7575 | 2 | | 0.6268 | 0.8468 | 3 | | 0.4239 | 0.9062 | 4 | | 0.2982 | 0.9414 | 5 | | 0.2245 | 0.9625 | 6 | | 0.1678 | 0.9730 | 7 | | 0.1399 | 0.9745 | 8 | | 0.1059 | 0.9827 | 9 | | 0.0822 | 0.9850 | 10 | | 0.0601 | 0.9902 | 11 | | 0.0481 | 0.9932 | 12 | | 0.0386 | 0.9955 | 13 | | 0.0292 | 0.9977 | 14 | | 0.0353 | 0.9940 | 15 | | 0.0336 | 0.9932 | 16 | | 0.0345 | 0.9910 | 17 | | 0.0179 | 0.9985 | 18 | | 0.0150 | 0.9985 | 19 | | 0.0365 | 0.9895 | 20 | | 0.0431 | 0.9895 | 21 | | 0.0243 | 0.9955 | 22 | | 0.0317 | 0.9925 | 23 | | 0.0375 | 0.9902 | 24 | | 0.0138 | 0.9970 | 25 | | 0.0159 | 0.9977 | 26 | | 0.0160 | 0.9962 | 27 | | 0.0151 | 0.9977 | 28 | | 0.0337 | 0.9902 | 29 | | 0.0119 | 0.9977 | 30 | | 0.0165 | 0.9955 | 31 | | 0.0133 | 0.9977 | 32 | | 0.0047 | 1.0 | 33 | | 0.0037 | 1.0 | 34 | | 0.0033 | 1.0 | 35 | | 0.0031 | 1.0 | 36 | | 0.0036 | 1.0 | 37 | | 0.0343 | 0.9887 | 38 | | 0.0234 | 0.9962 | 39 | | 0.0034 | 1.0 | 40 | | 0.0036 | 1.0 | 41 | | 0.0261 | 0.9917 | 42 | | 0.0111 | 0.9970 | 43 | | 0.0039 | 1.0 | 44 | | 0.0214 | 0.9932 | 45 | | 0.0044 | 0.9985 | 46 | | 0.0122 | 0.9985 | 47 | | 0.0119 | 0.9962 | 48 | | 0.0178 | 0.9962 | 49 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Tokenizers 0.10.3
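A hedged inference sketch for the card above, assuming the repo's TensorFlow weights load with the Auto classes and that the label indices follow the 0-7 subject mapping listed in the card:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumed label order, taken from the 0-7 mapping in the card above.
labels = ["art", "personal development", "world", "health",
          "science", "business", "humanities", "technology"]

tokenizer = AutoTokenizer.from_pretrained("Mingyi/classify_title_subject")
model = TFAutoModelForSequenceClassification.from_pretrained("Mingyi/classify_title_subject")

inputs = tokenizer("How black holes bend spacetime", return_tensors="tf")
logits = model(**inputs).logits
print(labels[int(tf.argmax(logits, axis=-1)[0])])  # prints the predicted subject
```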
Firat/distilbert-base-uncased-finetuned-squad
Firat
2022-01-26T19:05:23Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2856 | 1.0 | 2767 | 1.1919 | | 1.012 | 2.0 | 5534 | 1.1332 | | 0.8512 | 3.0 | 8301 | 1.1460 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.18.0 - Tokenizers 0.10.3
asahi417/tner-roberta-large-multiconer-en-adapter
asahi417
2022-01-26T16:13:58Z
10
0
adapter-transformers
[ "adapter-transformers", "adapterhub:named-entity-recognition/multiconer", "roberta", "dataset:multiconer", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - adapter-transformers - adapterhub:named-entity-recognition/multiconer - roberta datasets: - multiconer --- # Adapter `asahi417/tner-roberta-large-multiconer-en-adapter` for roberta-large An [adapter](https://adapterhub.ml) for the `roberta-large` model that was trained on the [named-entity-recognition/multiconer](https://adapterhub.ml/explore/named-entity-recognition/multiconer/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-large") adapter_name = model.load_adapter("asahi417/tner-roberta-large-multiconer-en-adapter", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
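As a follow-up to the loading snippet in the card, a hedged inference pass might look like the sketch below; the tokenizer choice and label decoding are assumptions, since the card does not list the tagging head's label set. The same pattern applies to the xlm-roberta-large adapter in the next card.

```python
# Hedged continuation of the card's snippet: `model` is the AutoModelWithHeads
# instance with the adapter loaded and set active, as shown above.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
inputs = tokenizer("Satoshi Nakamoto created Bitcoin.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # token-level logits from the tagging head
print(logits.argmax(dim=-1))  # per-token label ids; the id-to-label map lives in the head config
```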
asahi417/tner-xlm-roberta-large-multiconer-multi-adapter
asahi417
2022-01-26T15:46:42Z
3
0
adapter-transformers
[ "adapter-transformers", "adapterhub:named-entity-recognition/multiconer", "xlm-roberta", "dataset:multiconer", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - adapter-transformers - adapterhub:named-entity-recognition/multiconer - xlm-roberta datasets: - multiconer --- # Adapter `asahi417/tner-xlm-roberta-large-multiconer-multi-adapter` for xlm-roberta-large An [adapter](https://adapterhub.ml) for the `xlm-roberta-large` model that was trained on the [named-entity-recognition/multiconer](https://adapterhub.ml/explore/named-entity-recognition/multiconer/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("xlm-roberta-large") adapter_name = model.load_adapter("asahi417/tner-xlm-roberta-large-multiconer-multi-adapter", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
anirudh21/albert-xlarge-v2-finetuned-mrpc
anirudh21
2022-01-26T12:50:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: albert-xlarge-v2-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.7132352941176471 - name: F1 type: f1 value: 0.8145800316957211 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xlarge-v2-finetuned-mrpc This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5563 - Accuracy: 0.7132 - F1: 0.8146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 63 | 0.6898 | 0.5221 | 0.6123 | | No log | 2.0 | 126 | 0.6298 | 0.6838 | 0.8122 | | No log | 3.0 | 189 | 0.6043 | 0.7010 | 0.8185 | | No log | 4.0 | 252 | 0.5834 | 0.7010 | 0.8146 | | No log | 5.0 | 315 | 0.5563 | 0.7132 | 0.8146 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
krirk/wav2vec2-large-xls-r-300m-turkish-colab
krirk
2022-01-26T12:38:32Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3942 - Wer: 0.3149 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9921 | 3.67 | 400 | 0.7820 | 0.7857 | | 0.4496 | 7.34 | 800 | 0.4630 | 0.4977 | | 0.2057 | 11.01 | 1200 | 0.4293 | 0.4627 | | 0.1328 | 14.68 | 1600 | 0.4464 | 0.4068 | | 0.1009 | 18.35 | 2000 | 0.4461 | 0.3742 | | 0.0794 | 22.02 | 2400 | 0.4328 | 0.3467 | | 0.0628 | 25.69 | 2800 | 0.4036 | 0.3263 | | 0.0497 | 29.36 | 3200 | 0.3942 | 0.3149 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
SetFit/MiniLM-L12-H384-uncased__sst2__all-train
SetFit
2022-01-26T11:27:47Z
12
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: MiniLM-L12-H384-uncased__sst2__all-train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniLM-L12-H384-uncased__sst2__all-train This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2632 - Accuracy: 0.9055 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4183 | 1.0 | 433 | 0.3456 | 0.8720 | | 0.2714 | 2.0 | 866 | 0.2632 | 0.9055 | | 0.2016 | 3.0 | 1299 | 0.3357 | 0.8990 | | 0.1501 | 4.0 | 1732 | 0.4474 | 0.8863 | | 0.1119 | 5.0 | 2165 | 0.3998 | 0.8979 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
jcmc/wav2vec2-large-xlsr-53-ir
jcmc
2022-01-26T10:35:17Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ga-IE license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset. It achieves the following results on the evaluation set: - Loss: 1.0835 - Wer: 0.7490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1483 | 15.62 | 500 | 3.0498 | 1.0 | | 2.8449 | 31.25 | 1000 | 2.7790 | 0.9493 | | 1.8683 | 46.86 | 1500 | 1.2339 | 0.8161 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
anirudh21/albert-xlarge-v2-finetuned-wnli
anirudh21
2022-01-26T08:43:31Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: albert-xlarge-v2-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xlarge-v2-finetuned-wnli This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6869 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6906 | 0.5070 | | No log | 2.0 | 80 | 0.6869 | 0.5634 | | No log | 3.0 | 120 | 0.6905 | 0.5352 | | No log | 4.0 | 160 | 0.6960 | 0.4225 | | No log | 5.0 | 200 | 0.7011 | 0.3803 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
gullenasatish/wav2vec2-base-timit-demo-colab
gullenasatish
2022-01-26T08:36:41Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4872 - Wer: 0.3417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4857 | 4.0 | 500 | 1.4555 | 1.0040 | | 0.5994 | 8.0 | 1000 | 0.5011 | 0.4370 | | 0.2273 | 12.0 | 1500 | 0.4293 | 0.3903 | | 0.1235 | 16.0 | 2000 | 0.4602 | 0.3772 | | 0.084 | 20.0 | 2500 | 0.5055 | 0.3673 | | 0.0615 | 24.0 | 3000 | 0.4915 | 0.3486 | | 0.0468 | 28.0 | 3500 | 0.4872 | 0.3417 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
danielbubiola/bangla_asr
danielbubiola
2022-01-26T07:42:22Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: bangla_asr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla_asr This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200) on the None dataset. It achieves the following results on the evaluation set: - Loss: 157.8652 - Wer: 0.4507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2601.5363 | 7.46 | 500 | 259.6630 | 0.6863 | | 417.7386 | 14.93 | 1000 | 156.6117 | 0.5275 | | 262.9455 | 22.39 | 1500 | 155.0886 | 0.5006 | | 178.7715 | 29.85 | 2000 | 155.1077 | 0.4840 | | 132.448 | 37.31 | 2500 | 163.8623 | 0.4770 | | 116.3943 | 44.78 | 3000 | 161.5531 | 0.4609 | | 87.1653 | 52.24 | 3500 | 165.6857 | 0.4597 | | 80.5606 | 59.7 | 4000 | 157.8652 | 0.4507 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
kxiaoqiangrexian/bert_test
kxiaoqiangrexian
2022-01-26T06:52:37Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
vuiseng9/bert-mnli
vuiseng9
2022-01-26T06:48:02Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
This model was developed with transformers v4.9.1. Evaluation accuracy on the MNLI matched (m) and mismatched (mm) sets: ``` m = 0.8444 eval_samples = 9815 mm = 0.8495 eval_samples = 9832 ``` # Train ```bash #!/usr/bin/env bash export CUDA_VISIBLE_DEVICES=0 OUTDIR=bert-mnli NEPOCH=3 WORKDIR=transformers/examples/pytorch/text-classification cd $WORKDIR python run_glue.py \ --model_name_or_path bert-base-uncased \ --task_name mnli \ --max_seq_length 128 \ --do_train \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs $NEPOCH \ --logging_steps 1 \ --evaluation_strategy steps \ --save_steps 3000 \ --do_eval \ --per_device_eval_batch_size 128 \ --eval_steps 250 \ --output_dir $OUTDIR --overwrite_output_dir ``` # Eval ```bash export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-mnli WORKDIR=transformers/examples/pytorch/text-classification cd $WORKDIR nohup python run_glue.py \ --model_name_or_path vuiseng9/bert-mnli \ --task_name mnli \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ```
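For completeness, a hedged inference sketch with the published checkpoint; `id2label` comes from the model config, so no label order needs to be assumed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pair a premise with a hypothesis, as MNLI sentence-pair classification expects.
tokenizer = AutoTokenizer.from_pretrained("vuiseng9/bert-mnli")
model = AutoModelForSequenceClassification.from_pretrained("vuiseng9/bert-mnli")

enc = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # entailment / neutral / contradiction
```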
GleamEyeBeast/test
GleamEyeBeast
2022-01-26T04:38:42Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1761 - Wer: 0.2161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.5828 | 4.0 | 500 | 3.0263 | 1.0 | | 1.8657 | 8.0 | 1000 | 0.2213 | 0.2650 | | 0.332 | 12.0 | 1500 | 0.2095 | 0.2413 | | 0.2037 | 16.0 | 2000 | 0.1906 | 0.2222 | | 0.1282 | 20.0 | 2500 | 0.1761 | 0.2161 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
ziqingyang/XLMRobertaBaseForXNLI-en
ziqingyang
2022-01-26T02:03:42Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
chmanoj/xls-r-300m-sv
chmanoj
2022-01-26T00:01:07Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - sv-SE license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset. It achieves the following results on the evaluation set: - Loss: 0.8004 - Wer: 0.7139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.6683 | 1.45 | 500 | 1.7698 | 1.0041 | | 1.9548 | 2.91 | 1000 | 1.0890 | 0.8602 | | 1.9568 | 4.36 | 1500 | 1.0878 | 0.8680 | | 1.9497 | 5.81 | 2000 | 1.1501 | 0.8838 | | 1.8453 | 7.27 | 2500 | 1.0452 | 0.8418 | | 1.6952 | 8.72 | 3000 | 0.9153 | 0.7823 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu113 - Datasets 1.18.1.dev0 - Tokenizers 0.10.3
jiobiala24/wav2vec2-base-checkpoint-9
jiobiala24
2022-01-25T19:52:35Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-9 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-8](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-8) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9203 - Wer: 0.3258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2783 | 1.58 | 1000 | 0.5610 | 0.3359 | | 0.2251 | 3.16 | 2000 | 0.5941 | 0.3374 | | 0.173 | 4.74 | 3000 | 0.6026 | 0.3472 | | 0.1475 | 6.32 | 4000 | 0.6750 | 0.3482 | | 0.1246 | 7.9 | 5000 | 0.6673 | 0.3414 | | 0.1081 | 9.48 | 6000 | 0.7072 | 0.3409 | | 0.1006 | 11.06 | 7000 | 0.7413 | 0.3392 | | 0.0879 | 12.64 | 8000 | 0.7831 | 0.3394 | | 0.0821 | 14.22 | 9000 | 0.7371 | 0.3333 | | 0.0751 | 15.8 | 10000 | 0.8321 | 0.3445 | | 0.0671 | 17.38 | 11000 | 0.8362 | 0.3357 | | 0.0646 | 18.96 | 12000 | 0.8709 | 0.3367 | | 0.0595 | 20.54 | 13000 | 0.8352 | 0.3321 | | 0.0564 | 22.12 | 14000 | 0.8854 | 0.3323 | | 0.052 | 23.7 | 15000 | 0.9031 | 0.3315 | | 0.0485 | 25.28 | 16000 | 0.9171 | 0.3278 | | 0.046 | 26.86 | 17000 | 0.9390 | 0.3254 | | 0.0438 | 28.44 | 18000 | 0.9203 | 0.3258 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3