Dataset schema, as summarized by the dataset viewer (nullable columns marked):

| Column | Type | Values / range |
| --- | --- | --- |
| `modelId` | string | length 4 to 112 |
| `sha` | string | length 40 (commit hash) |
| `lastModified` | string | length 24 (ISO 8601 timestamp) |
| `tags` | sequence | |
| `pipeline_tag` | string | 29 classes |
| `private` | bool | 1 class |
| `author` | string | length 2 to 38, nullable |
| `config` | null | |
| `id` | string | length 4 to 112 |
| `downloads` | float64 | 0 to 36.8M, nullable |
| `likes` | float64 | 0 to 712, nullable |
| `library_name` | string | 17 classes |
| `__index_level_0__` | int64 | 0 to 38.5k |
| `readme` | string | length 0 to 186k |
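One record per model follows, one field per line in the column order above. As a sketch of how such a dump can be inspected, assuming it has been exported to a JSON-lines file (the file name below is a placeholder, not from the source):

```python
# Sketch: load a dump with this schema and filter it with pandas.
# "models.jsonl" is a placeholder assumption; the source does not say
# where the dump lives.
import pandas as pd

df = pd.read_json("models.jsonl", lines=True)

# Columns match the schema above: modelId, sha, lastModified, tags,
# pipeline_tag, private, author, config, id, downloads, likes,
# library_name, __index_level_0__, readme.
token_clf = df[df["pipeline_tag"] == "token-classification"]
print(token_clf[["modelId", "downloads", "likes"]].head())
```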
gaunernst/bert-L12-H256-uncased
87dd2de5d342ae985eee7078380e6f5b06b41bb0
2022-07-02T08:54:10.000Z
[ "pytorch", "bert", "transformers", "license:apache-2.0" ]
null
false
gaunernst
null
gaunernst/bert-L12-H256-uncased
2
null
transformers
26,500
--- license: apache-2.0 ---
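The card above is license-only. A minimal usage sketch, not from the card, assuming the checkpoint loads through the standard transformers auto-classes (its tags list pytorch, bert, and transformers):

```python
# Sketch: load the BERT checkpoint above with the transformers auto-classes.
# Assumes the repo ships a compatible tokenizer and config.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gaunernst/bert-L12-H256-uncased")
model = AutoModel.from_pretrained("gaunernst/bert-L12-H256-uncased")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
# H256 in the model name suggests a hidden size of 256.
print(outputs.last_hidden_state.shape)
```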
huggingtweets/crimseyvt
d0e03351665c501a6b33538fd7f7fd1ba729bfca
2022-07-02T10:12:34.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/crimseyvt
2
null
transformers
26,501
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1388858833582297095/5_Fg641d_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">CrimseyVT~</div> <div style="text-align: center; font-size: 14px;">@crimseyvt</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from CrimseyVT~. | Data | CrimseyVT~ | | --- | --- | | Tweets downloaded | 1417 | | Retweets | 195 | | Short tweets | 182 | | Tweets kept | 1040 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vwlwiq1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @crimseyvt's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x7shpw89) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x7shpw89/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/crimseyvt') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Neha2608/xlm-roberta-base-finetuned-panx-de-fr
0ab399fc94ef25934d18cd69a0629fe8a3ea5896
2022-07-02T11:39:59.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Neha2608
null
Neha2608/xlm-roberta-base-finetuned-panx-de-fr
2
null
transformers
26,502
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1644 - F1: 0.8617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 | | 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 | | 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
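The PAN-X cards in this dump document training but not inference. A minimal inference sketch for the model above, assuming standard pipeline support for token classification (the example sentence is illustrative):

```python
# Sketch: run the fine-tuned NER model above through the transformers pipeline.
# aggregation_strategy="simple" merges word-piece tokens into whole entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Neha2608/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel a visitΓ© Paris."))
```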
Neha2608/xlm-roberta-base-finetuned-panx-fr
5e1ded7dc4b8058a48a2a0b5c9aeed423232902e
2022-07-02T11:59:36.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Neha2608
null
Neha2608/xlm-roberta-base-finetuned-panx-fr
2
null
transformers
26,503
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.835464333781965 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2867 - F1: 0.8355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5817 | 1.0 | 191 | 0.3395 | 0.7854 | | 0.2617 | 2.0 | 382 | 0.2856 | 0.8278 | | 0.1708 | 3.0 | 573 | 0.2867 | 0.8355 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
SelamatPagi/xlm-roberta-base-finetuned-panx-de
f3d0a5864b97ca3f024d220bc2fae380a4ce136d
2022-07-02T12:16:51.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
SelamatPagi
null
SelamatPagi/xlm-roberta-base-finetuned-panx-de
2
null
transformers
26,504
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8620945214069894 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1372 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 | | 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 | | 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
Neha2608/xlm-roberta-base-finetuned-panx-en
aeec941bc63580d03400b22e152301e3742193eb
2022-07-02T12:35:18.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Neha2608
null
Neha2608/xlm-roberta-base-finetuned-panx-en
2
null
transformers
26,505
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.692179700499168 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3921 - F1: 0.6922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 | | 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 | | 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
kidzy/distilbert-base-uncased-distilled-clinc
b4983ed5c7ada698b02ce5c33a656a12e7726a3a
2022-07-02T14:18:20.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:clinc_oos", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
kidzy
null
kidzy/distilbert-base-uncased-distilled-clinc
2
null
transformers
26,506
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9470967741935484 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2653 - Accuracy: 0.9471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 1.5714 | 0.7371 | | 1.9106 | 2.0 | 636 | 0.7918 | 0.8655 | | 1.9106 | 3.0 | 954 | 0.4652 | 0.9110 | | 0.7184 | 4.0 | 1272 | 0.3420 | 0.9345 | | 0.3443 | 5.0 | 1590 | 0.3015 | 0.9439 | | 0.3443 | 6.0 | 1908 | 0.2834 | 0.9442 | | 0.2513 | 7.0 | 2226 | 0.2732 | 0.9445 | | 0.2214 | 8.0 | 2544 | 0.2693 | 0.9465 | | 0.2214 | 9.0 | 2862 | 0.2673 | 0.9452 | | 0.2117 | 10.0 | 3180 | 0.2653 | 0.9471 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
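The card above reports accuracy on clinc_oos but shows no usage. A hedged sketch of querying the intent classifier (the example utterance is illustrative):

```python
# Sketch: query the distilled intent classifier above.
# clinc_oos ("plus" config) labels are intent names, so the top prediction
# is an intent rather than a sentiment.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kidzy/distilbert-base-uncased-distilled-clinc",
)
print(classifier("How do I transfer money to my savings account?"))
```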
scaccomatto/autotrain-dataset-en-5-mini-1-50-truncate-1076038122
765bfea2b9d4f38d9d93612b12ba4492ffe543ab
2022-07-02T14:59:36.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:scaccomatto/autotrain-data-dataset-en-5-mini-1-50-truncate", "transformers", "autotrain", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
scaccomatto
null
scaccomatto/autotrain-dataset-en-5-mini-1-50-truncate-1076038122
2
null
transformers
26,507
--- tags: autotrain language: en widget: - text: "I love AutoTrain πŸ€—" datasets: - scaccomatto/autotrain-data-dataset-en-5-mini-1-50-truncate co2_eq_emissions: 6.1987408118248375 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1076038122 - CO2 Emissions (in grams): 6.1987408118248375 ## Validation Metrics - Loss: 0.5054866671562195 - Rouge1: 76.4469 - Rouge2: 72.6874 - RougeL: 76.3128 - RougeLsum: 76.2952 - Gen Len: 19.3856 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/scaccomatto/autotrain-dataset-en-5-mini-1-50-truncate-1076038122 ```
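The card gives only a cURL call. The same request from Python, as a sketch (the API key placeholder is kept from the card):

```python
# Sketch: the card's cURL call, rewritten with requests.
# YOUR_HUGGINGFACE_API_KEY is a placeholder, exactly as in the card.
import requests

API_URL = (
    "https://api-inference.huggingface.co/models/"
    "scaccomatto/autotrain-dataset-en-5-mini-1-50-truncate-1076038122"
)
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```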
scaccomatto/autotrain-dataset-en-5-mini-1-50-num-1076338146
d13a0053bcd9bc1a2ac2e1323aaa809449056602
2022-07-02T15:13:42.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:scaccomatto/autotrain-data-dataset-en-5-mini-1-50-num", "transformers", "autotrain", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
scaccomatto
null
scaccomatto/autotrain-dataset-en-5-mini-1-50-num-1076338146
2
null
transformers
26,508
--- tags: autotrain language: en widget: - text: "I love AutoTrain πŸ€—" datasets: - scaccomatto/autotrain-data-dataset-en-5-mini-1-50-num co2_eq_emissions: 5.239170170576799 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 1076338146 - CO2 Emissions (in grams): 5.239170170576799 ## Validation Metrics - Loss: 0.6177766919136047 - Rouge1: 76.4034 - Rouge2: 72.6118 - RougeL: 76.233 - RougeLsum: 76.2601 - Gen Len: 18.6275 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/scaccomatto/autotrain-dataset-en-5-mini-1-50-num-1076338146 ```
tner/roberta-large-tweetner-2021
a8043252a4628563ec9c63092aa98346234d241f
2022-07-07T03:21:12.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/roberta-large-tweetner-2021
2
null
transformers
26,509
Entry not found
tner/roberta-large-tweetner-2020-2021-concat
74372e7bba305489c7bf4e7ae9e698901b35c1ba
2022-07-07T23:30:54.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/roberta-large-tweetner-2020-2021-concat
2
null
transformers
26,510
Entry not found
gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446
035aff603d9c1a161fe7642425a75bc4dd4b4fce
2022-07-02T23:02:11.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:gabrielaltay/autotrain-data-at-test-bb-tmp-scitail", "dataset:bigscience-biomedical/tmp-scitail", "transformers", "autotrain", "model-index", "co2_eq_emissions" ]
text-classification
false
gabrielaltay
null
gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446
2
null
transformers
26,511
--- tags: autotrain language: en widget: - text: "I love AutoTrain \U0001F917" datasets: - gabrielaltay/autotrain-data-at-test-bb-tmp-scitail - bigscience-biomedical/tmp-scitail co2_eq_emissions: 0.030427681636382462 model-index: - name: gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446 results: - task: type: text-classification name: Text Classification dataset: name: bigscience-biomedical/tmp-scitail type: bigscience-biomedical/tmp-scitail config: scitail_bigbio_te split: test metrics: - name: Accuracy type: accuracy value: 0.7714016933207902 verified: true - name: Precision type: precision value: 0.7829787234042553 verified: true - name: Recall type: recall value: 0.8598130841121495 verified: true - name: AUC type: auc value: 0.8606862462169141 verified: true - name: F1 type: f1 value: 0.8195991091314032 verified: true - name: loss type: loss value: 0.46928563714027405 verified: true --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1078438446 - CO2 Emissions (in grams): 0.030427681636382462 ## Validation Metrics - Loss: 0.440134197473526 - Accuracy: 0.808282208588957 - Precision: 0.7823613086770982 - Recall: 0.8500772797527048 - AUC: 0.8850060812225493 - F1: 0.8148148148148148 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("gabrielaltay/autotrain-at-test-bb-tmp-scitail-1078438446", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Elliotte/Hubert-base-superb
ea736d410031f0a6050c8ea84bf932a4fd6fa64b
2022-07-03T15:27:20.000Z
[ "pytorch", "tensorboard", "hubert", "automatic-speech-recognition", "dataset:superb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Elliotte
null
Elliotte/Hubert-base-superb
2
null
transformers
26,512
--- license: apache-2.0 tags: - generated_from_trainer datasets: - superb model-index: - name: Hubert-base-superb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hubert-base-superb This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.6712 - Wer: 0.4781 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.7884 | 0.8 | 500 | 0.8900 | 0.6940 | | 0.6603 | 1.6 | 1000 | 0.7378 | 0.6103 | | 0.5401 | 2.4 | 1500 | 0.7107 | 0.5762 | | 0.4604 | 3.2 | 2000 | 0.6563 | 0.5320 | | 0.3936 | 4.0 | 2500 | 0.6315 | 0.5244 | | 0.3186 | 4.8 | 3000 | 0.6525 | 0.5007 | | 0.2727 | 5.6 | 3500 | 0.6553 | 0.4855 | | 0.2296 | 6.4 | 4000 | 0.6712 | 0.4781 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
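The card above documents fine-tuning but not inference. A sketch of transcribing an audio file, assuming pipeline support for this CTC checkpoint (the file path is a placeholder, and decoding a local file needs ffmpeg):

```python
# Sketch: run the fine-tuned HuBERT checkpoint above on a local audio file.
# "sample.wav" is a placeholder path; the pipeline resamples to 16 kHz itself.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Elliotte/Hubert-base-superb",
)
print(asr("sample.wav"))
```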
tner/twitter-roberta-base-dec2021-tweetner-2021
8ec44074951852e525801524f8567f7897839cae
2022-07-07T10:12:44.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/twitter-roberta-base-dec2021-tweetner-2021
2
null
transformers
26,513
Entry not found
tner/twitter-roberta-base-dec2021-tweetner-2020-2021-concat
fe5a1d21c56e627a9dc6c855b56a54c794f1ad37
2022-07-07T18:02:51.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/twitter-roberta-base-dec2021-tweetner-2020-2021-concat
2
null
transformers
26,514
Entry not found
tner/roberta-base-tweetner-2021
bb5afe99c8478ab5875abaefc18e50d848817036
2022-07-11T22:23:52.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
tner
null
tner/roberta-base-tweetner-2021
2
null
transformers
26,515
Entry not found
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_70
cf98befa10c7b13e45429a896f944424f92b4971
2022-07-03T13:29:19.000Z
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
BBarbarestani
null
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_70
2
null
transformers
26,516
Entry not found
shubhamitra/tmp
ca8a73efadd6853b97356eede03ce7e9738de94d
2022-07-03T12:50:54.000Z
[ "pytorch", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
shubhamitra
null
shubhamitra/tmp
2
null
transformers
26,517
--- tags: - generated_from_trainer model-index: - name: tmp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tmp This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 123 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | No log | 1.0 | 498 | 0.0483 | 0.7486 | 0.8563 | 0.9171 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_50_2
ba8290712af856bdb497334807406b096b1ae479
2022-07-05T01:30:47.000Z
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
BBarbarestani
null
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_50_2
2
null
transformers
26,518
Entry not found
haesun/xlm-roberta-base-finetuned-panx-it
4f16e1f81484a16a756f8a5ed90b59e22f1055e9
2022-07-05T00:58:00.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
haesun
null
haesun/xlm-roberta-base-finetuned-panx-it
2
null
transformers
26,519
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8289473684210525 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2403 - F1: 0.8289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.668 | 1.0 | 105 | 0.2886 | 0.7818 | | 0.2583 | 2.0 | 210 | 0.2421 | 0.8202 | | 0.1682 | 3.0 | 315 | 0.2403 | 0.8289 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
haesun/xlm-roberta-base-finetuned-panx-en
bb189c77c622e451d3c001384a0de1d38c071d60
2022-07-05T01:11:50.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
haesun
null
haesun/xlm-roberta-base-finetuned-panx-en
2
null
transformers
26,520
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.6994475138121546 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3848 - F1: 0.6994 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0435 | 1.0 | 74 | 0.5169 | 0.5532 | | 0.4719 | 2.0 | 148 | 0.4224 | 0.6630 | | 0.3424 | 3.0 | 222 | 0.3848 | 0.6994 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
haesun/xlm-roberta-base-finetuned-panx-all
c5464c48ab1d3b69b7aca57fe245f77bbc3ef575
2022-07-05T01:33:56.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
haesun
null
haesun/xlm-roberta-base-finetuned-panx-all
2
null
transformers
26,521
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1387 - F1: 0.8856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2967 | 1.0 | 1252 | 0.1817 | 0.8284 | | 0.1576 | 2.0 | 2504 | 0.1521 | 0.8597 | | 0.0996 | 3.0 | 3756 | 0.1387 | 0.8856 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
huggingtweets/mattyglesias
bc00853b152ec70a16c15c4fbac602ed98000cac
2022-07-04T22:20:15.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/mattyglesias
2
null
transformers
26,522
--- language: en thumbnail: http://www.huggingtweets.com/mattyglesias/1656973210167/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1516223147284082698/DbtV01ez_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Yglesias</div> <div style="text-align: center; font-size: 14px;">@mattyglesias</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Matthew Yglesias. | Data | Matthew Yglesias | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 408 | | Short tweets | 163 | | Tweets kept | 2678 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mo3hke3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattyglesias's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/491avjbi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/491avjbi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/mattyglesias') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
google/owlvit-base-patch16
e1ab91248635e59130c75690e34433721095ec4d
2022-07-21T11:45:53.000Z
[ "pytorch", "owlvit", "transformers", "license:apache-2.0" ]
null
false
google
null
google/owlvit-base-patch16
2
null
transformers
26,523
--- license: apache-2.0 ---
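The two OWL-ViT cards here are license-only. A zero-shot object-detection sketch, assuming a transformers version recent enough to ship the OwlViT classes (the image path and text queries are placeholders):

```python
# Sketch: zero-shot object detection with the OWL-ViT checkpoint above.
# "cat.jpg" and the text queries are placeholder assumptions.
import torch
from PIL import Image
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch16")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")

image = Image.open("cat.jpg")
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)
print(results[0]["scores"], results[0]["labels"], results[0]["boxes"])
```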
google/owlvit-large-patch14
f8095a645b8638cf7757fe2d4fa040e0fc0c93db
2022-07-21T12:29:18.000Z
[ "pytorch", "owlvit", "transformers", "license:apache-2.0" ]
null
false
google
null
google/owlvit-large-patch14
2
null
transformers
26,524
--- license: apache-2.0 ---
HekmatTaherinejad/swin-tiny-patch4-window7-224-finetuned-eurosat
3a2309cdec5029ef9d4411741d02a074e6f55f46
2022-07-05T09:17:32.000Z
[ "pytorch", "tensorboard", "swin", "image-classification", "dataset:imagefolder", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
HekmatTaherinejad
null
HekmatTaherinejad/swin-tiny-patch4-window7-224-finetuned-eurosat
2
null
transformers
26,525
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder args: default metrics: - name: Accuracy type: accuracy value: 0.98 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0653 - Accuracy: 0.98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.203 | 1.0 | 190 | 0.1294 | 0.9574 | | 0.2017 | 2.0 | 380 | 0.0773 | 0.9763 | | 0.1563 | 3.0 | 570 | 0.0653 | 0.98 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
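A hedged inference sketch for the image classifier above (the image path is a placeholder):

```python
# Sketch: classify an image with the fine-tuned Swin checkpoint above.
# "field.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="HekmatTaherinejad/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("field.jpg"))
```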
nawta/wav2vec2-onomatopoeia-finetune_smalldata3
18c708d57e4c2e5a4e889408f80f090b425ed1d6
2022-07-05T09:39:49.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
nawta
null
nawta/wav2vec2-onomatopoeia-finetune_smalldata3
2
null
transformers
26,526
Entry not found
arashba/xlm-roberta-base-finetuned-panx-de
f110474667be94a29a04c746b18c3010192f2497
2022-07-05T12:05:52.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
arashba
null
arashba/xlm-roberta-base-finetuned-panx-de
2
null
transformers
26,527
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8620945214069894 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1372 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 | | 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 | | 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
jakka/t5_small_NCC_lm-finetuned-sv-frp-classifier-3
ac93d4d14bf066ae595f3df87fa2e1ff0bdee51a
2022-07-05T13:57:55.000Z
[ "pytorch", "t5", "text2text-generation", "dataset:norwegian_parliament", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
jakka
null
jakka/t5_small_NCC_lm-finetuned-sv-frp-classifier-3
2
null
transformers
26,528
--- license: apache-2.0 tags: - generated_from_trainer datasets: - norwegian_parliament model-index: - name: t5_small_NCC_lm-finetuned-sv-frp-classifier-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_small_NCC_lm-finetuned-sv-frp-classifier-3 This model is a fine-tuned version of [north/t5_small_NCC_lm](https://huggingface.co/north/t5_small_NCC_lm) on the norwegian_parliament dataset. It achieves the following results on the evaluation set: - Loss: nan - Sequence Accuracy: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sequence Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-----------------:| | No log | 1.0 | 113 | nan | 0.0 | | No log | 2.0 | 226 | nan | 0.0 | | No log | 3.0 | 339 | nan | 0.0 | | No log | 4.0 | 452 | nan | 0.0 | | 0.0 | 5.0 | 565 | nan | 0.0 | | 0.0 | 6.0 | 678 | nan | 0.0 | | 0.0 | 7.0 | 791 | nan | 0.0 | | 0.0 | 8.0 | 904 | nan | 0.0 | | 0.0 | 9.0 | 1017 | nan | 0.0 | | 0.0 | 10.0 | 1130 | nan | 0.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0 - Datasets 2.3.2 - Tokenizers 0.11.0
Eleven/xlm-roberta-base-finetuned-panx-fr
c3fc0c014e82b7ae93b11e6f570de21c4b06f441
2022-07-05T16:36:53.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Eleven
null
Eleven/xlm-roberta-base-finetuned-panx-fr
2
null
transformers
26,529
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.835464333781965 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2867 - F1: 0.8355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5817 | 1.0 | 191 | 0.3395 | 0.7854 | | 0.2617 | 2.0 | 382 | 0.2856 | 0.8278 | | 0.1708 | 3.0 | 573 | 0.2867 | 0.8355 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Eleven/xlm-roberta-base-finetuned-panx-it
f779dbb8e8a05314c7a6dd68b67a488829af7612
2022-07-05T16:53:50.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Eleven
null
Eleven/xlm-roberta-base-finetuned-panx-it
2
null
transformers
26,530
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8247845711940912 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2421 - F1: 0.8248 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.809 | 1.0 | 70 | 0.3380 | 0.7183 | | 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 | | 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Eleven/xlm-roberta-base-finetuned-panx-en
437bc45a0508dcd5a881c4644b4777a18ea93bf6
2022-07-05T17:09:52.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Eleven
null
Eleven/xlm-roberta-base-finetuned-panx-en
2
null
transformers
26,531
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.692179700499168 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3921 - F1: 0.6922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 | | 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 | | 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
huggingtweets/donaldtusk
fc76e1125a644ffb7f08b972f0685e9f28dddafb
2022-07-05T20:21:55.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/donaldtusk
2
null
transformers
26,532
--- language: en thumbnail: http://www.huggingtweets.com/donaldtusk/1657052510922/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/990605878993793024/7uuCR4hP_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Donald Tusk</div> <div style="text-align: center; font-size: 14px;">@donaldtusk</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Donald Tusk. | Data | Donald Tusk | | --- | --- | | Tweets downloaded | 910 | | Retweets | 194 | | Short tweets | 32 | | Tweets kept | 684 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pclez81/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @donaldtusk's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3oogjdqv) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3oogjdqv/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/donaldtusk') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
venturaville/xlm-roberta-base-finetuned-panx-de
1f947e9bfdaac585f23d5e081a4ed7af79251b89
2022-07-25T15:02:55.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
venturaville
null
venturaville/xlm-roberta-base-finetuned-panx-de
2
null
transformers
26,533
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8632527372262775 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1367 - F1: 0.8633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2582 | 1.0 | 525 | 0.1653 | 0.8238 | | 0.1301 | 2.0 | 1050 | 0.1417 | 0.8439 | | 0.0841 | 3.0 | 1575 | 0.1367 | 0.8633 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_60_2
77968628649e69c2a486cbdee35e33a66b8a86c9
2022-07-06T00:26:19.000Z
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
BBarbarestani
null
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_60_2
2
null
transformers
26,534
Entry not found
elasticdotventures/distilbert-base-uncased-finetuned-squad
d319bde49ff764f17b9b84ebba31331f8649544c
2022-07-06T09:03:44.000Z
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
elasticdotventures
null
elasticdotventures/distilbert-base-uncased-finetuned-squad
2
null
transformers
26,535
Entry not found
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_70_2
90f05b8cdf7a0a316ccb95fe24db61b50878874a
2022-07-06T10:00:15.000Z
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
BBarbarestani
null
BBarbarestani/RoBERTa_HateXplain_Target_Span_Detection_UQS_Threshold_70_2
2
null
transformers
26,536
Entry not found
sumitrsch/xlm_R_large_multiconer22_hi
cc0ac072270957c21d125bd06838dad0c2171110
2022-07-06T12:26:37.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "license:afl-3.0", "autotrain_compatible" ]
token-classification
false
sumitrsch
null
sumitrsch/xlm_R_large_multiconer22_hi
2
null
transformers
26,537
--- license: afl-3.0 --- Put this model path in the variable `best_model_path` in the first cell of the linked Colab notebook to test the SemEval MultiCoNER task. https://colab.research.google.com/drive/17WyqwdoRNnzImeik6wTRE5uuj9QQnkXA#scrollTo=nYtUtmyDFAqP
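As a minimal illustration of the step the card describes, plus a direct load that is an assumption rather than part of the notebook:

```python
# Sketch of the step the card describes: point the notebook at this checkpoint.
best_model_path = "sumitrsch/xlm_R_large_multiconer22_hi"

# Loading it outside the notebook is an assumption, not part of the card.
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(best_model_path)
model = AutoModelForTokenClassification.from_pretrained(best_model_path)
```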
chiendvhust/distilbert-base-uncased-finetuned-squad
f841bbf809a731e0c347eabab214c9750afa22d4
2022-07-06T14:46:49.000Z
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
chiendvhust
null
chiendvhust/distilbert-base-uncased-finetuned-squad
2
null
transformers
26,538
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2747 | 1.0 | 5533 | 1.2178 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
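A short usage sketch for the SQuAD fine-tune above, assuming standard pipeline support (question and context are illustrative):

```python
# Sketch: query the SQuAD fine-tune above with the question-answering pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chiendvhust/distilbert-base-uncased-finetuned-squad",
)
print(qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset.",
))
```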
sumitrsch/Indic-bert_multiconer22_bn
158aa18da626934027dddfd3b7a2e4d9056ab14f
2022-07-06T12:32:40.000Z
[ "pytorch", "albert", "token-classification", "transformers", "license:afl-3.0", "autotrain_compatible" ]
token-classification
false
sumitrsch
null
sumitrsch/Indic-bert_multiconer22_bn
2
1
transformers
26,539
--- license: afl-3.0 --- Put this model path in the variable `best_model_path` in the first cell of the linked Colab notebook to test the SemEval MultiCoNER task (Bangla track). https://colab.research.google.com/drive/1P9827acdS7i6eZTi4B0cOms5qLREqvUO
paola-md/recipe-test
ee53fe4b58ab1ec6f7ecce1b01f48ab3dd0f2456
2022-07-06T10:32:13.000Z
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
paola-md
null
paola-md/recipe-test
2
null
transformers
26,540
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-test
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# recipe-test

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9583

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3675        | 1.0   | 16   | 3.0009          |
| 3.0062        | 2.0   | 32   | 2.9583          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
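A minimal sketch of querying this masked-language model with the `transformers` fill-mask pipeline; the recipe-flavoured prompt is a made-up example:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="paola-md/recipe-test")

# DistilBERT-based models use the [MASK] token.
for pred in fill("Preheat the [MASK] to 180 degrees."):
    print(pred["token_str"], round(pred["score"], 3))
```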
sumitrsch/xlm_R_large_multiconer22_bn
236307e19f49d16c233d3f0d5c3f6c47991b8e92
2022-07-06T12:32:05.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "license:afl-3.0", "autotrain_compatible" ]
token-classification
false
sumitrsch
null
sumitrsch/xlm_R_large_multiconer22_bn
2
1
transformers
26,541
---
license: afl-3.0
---

Put this model path in the `best_model_path` variable in the first cell of the Colab notebook below to test it on the SemEval MultiCoNER task (Bangla track): https://colab.research.google.com/drive/1P9827acdS7i6eZTi4B0cOms5qLREqvUO
sumitrsch/mbert_multiconer22_hi
e7ff741cd2847ce73a773c27232e212237bfd25c
2022-07-06T12:25:50.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
sumitrsch
null
sumitrsch/mbert_multiconer22_hi
2
null
transformers
26,542
Put this model path in the `best_model_path` variable in the first cell of the Colab notebook below to test it on the SemEval MultiCoNER task (Hindi track): https://colab.research.google.com/drive/17WyqwdoRNnzImeik6wTRE5uuj9QQnkXA#scrollTo=nYtUtmyDFAqP
sumitrsch/mbert_multiconer22_bn
fa0e2d64571372017d49817b4001f72b4c158bc0
2022-07-06T12:30:50.000Z
[ "pytorch", "bert", "token-classification", "transformers", "license:afl-3.0", "autotrain_compatible" ]
token-classification
false
sumitrsch
null
sumitrsch/mbert_multiconer22_bn
2
1
transformers
26,543
---
license: afl-3.0
---

Put this model path in the `best_model_path` variable in the first cell of the Colab notebook below to test it on the SemEval MultiCoNER task (Bangla track): https://colab.research.google.com/drive/1P9827acdS7i6eZTi4B0cOms5qLREqvUO
saekomdalkom/t5-small-finetuned-xsum
bff666bb7d70fb588bd0b38f187dce7f45efc799
2022-07-06T15:25:39.000Z
[ "pytorch", "t5", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
saekomdalkom
null
saekomdalkom/t5-small-finetuned-xsum
2
null
transformers
26,544
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: xsum
      type: xsum
      args: default
    metrics:
    - name: Rouge1
      type: rouge
      value: 28.3577
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-xsum

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4783
- Rouge1: 28.3577
- Rouge2: 7.759
- Rougel: 22.274
- Rougelsum: 22.2869
- Gen Len: 18.8298

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.7158        | 1.0   | 12753 | 2.4783          | 28.3577 | 7.759  | 22.274 | 22.2869   | 18.8298 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
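A minimal sketch of running this checkpoint for summarization with the `transformers` pipeline; the input passage is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="saekomdalkom/t5-small-finetuned-xsum")

# XSum-style models target short, single-sentence summaries.
text = (
    "The local council approved plans for a new bridge on Tuesday, "
    "ending years of debate over how to ease congestion in the town centre."
)
print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
```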
huggingtweets/zanza47
996c4fa9cdc1bd845c289000c4f826762e2bafbc
2022-07-06T16:45:17.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/zanza47
2
null
transformers
26,545
---
language: en
thumbnail: http://www.huggingtweets.com/zanza47/1657125860989/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1312214716941393920/sX37K0us_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Detective Zanza (Commissions! 1/3 full)</div>
<div style="text-align: center; font-size: 14px;">@zanza47</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Detective Zanza (Commissions! 1/3 full).

| Data | Detective Zanza (Commissions! 1/3 full) |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 1157 |
| Short tweets | 284 |
| Tweets kept | 1801 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/383lput2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zanza47's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/dipzmx4r) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/dipzmx4r/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/zanza47')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ManqingLiu/xlm-roberta-base-finetuned-panx-de
5efb0f766a121f7d1e95dc69d813135143e65a7d
2022-07-06T18:16:00.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
ManqingLiu
null
ManqingLiu/xlm-roberta-base-finetuned-panx-de
2
null
transformers
26,546
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: xtreme
      type: xtreme
      args: PAN-X.de
    metrics:
    - name: F1
      type: f1
      value: 0.8627004891366169
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539        | 1.0   | 525  | 0.1697          | 0.8179 |
| 0.1317        | 2.0   | 1050 | 0.1327          | 0.8516 |
| 0.0819        | 3.0   | 1575 | 0.1363          | 0.8627 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
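A minimal sketch of running this PAN-X German NER checkpoint with the `transformers` pipeline, grouping word pieces into entity spans; the sentence is an invented example:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ManqingLiu/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

# Invented German sentence containing a person and a location.
for ent in ner("Angela Merkel besuchte gestern Berlin."):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```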
huggingtweets/carterhiggins
d8482b97a6307c8f2eadc8c2c7a1c23dc85b240e
2022-07-07T01:27:42.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/carterhiggins
2
null
transformers
26,547
---
language: en
thumbnail: http://www.huggingtweets.com/carterhiggins/1657157256503/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1296229510510030849/0dyqAcul_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Carter Higgins</div>
<div style="text-align: center; font-size: 14px;">@carterhiggins</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Carter Higgins.

| Data | Carter Higgins |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 538 |
| Short tweets | 573 |
| Tweets kept | 2136 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/302150se/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @carterhiggins's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38d6gnmr) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38d6gnmr/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/carterhiggins')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ChauNguyen23/distilbert-base-uncased-finetuned-imdb
66ff33ced6b085477555d578681685d2fa24214b
2022-07-07T02:54:46.000Z
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
ChauNguyen23
null
ChauNguyen23/distilbert-base-uncased-finetuned-imdb
2
null
transformers
26,548
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086        | 1.0   | 157  | 2.4897          |
| 2.5796        | 2.0   | 314  | 2.4230          |
| 2.5269        | 3.0   | 471  | 2.4354          |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
Vikasbhandari/TRY
cb5b03c982314b195cfceb8077895e7bf35e7b20
2022-07-07T12:17:31.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Vikasbhandari
null
Vikasbhandari/TRY
2
null
transformers
26,549
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: TRY
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# TRY

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4234
- eval_wer: 0.3884
- eval_runtime: 51.9275
- eval_samples_per_second: 32.353
- eval_steps_per_second: 4.044
- epoch: 7.03
- step: 3500

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
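A minimal transcription sketch via the `transformers` ASR pipeline, assuming the repository ships a usable processor/tokenizer alongside the weights; the audio path is a placeholder and the input should be 16 kHz mono:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Vikasbhandari/TRY")

# Placeholder path; wav2vec2-base expects 16 kHz mono audio.
print(asr("sample_16khz.wav")["text"])
```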
AdilOcd/t5large1
e5688a30a6b72dc26f54cff91067ada388d458c1
2022-07-08T01:40:30.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
AdilOcd
null
AdilOcd/t5large1
2
null
transformers
26,550
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5large1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5large1

This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
Mascariddu8/distilbert-base-uncased-finetuned-imdb-accelerate
2bfe7b2d45cc94709bf0072d9a9ed8046e470d5e
2022-07-07T18:11:07.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Mascariddu8
null
Mascariddu8/distilbert-base-uncased-finetuned-imdb-accelerate
2
null
transformers
26,551
Entry not found
jonatasgrosman/exp_w2v2t_en_wav2vec2_s878
a7ed9807d8230457cbb16e690a06f269bf647e4a
2022-07-08T03:56:34.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_wav2vec2_s878
2
null
transformers
26,552
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_wav2vec2_s878

Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
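Since this checkpoint (like the sibling `exp_w2v2t_en_*` checkpoints below) was produced with HuggingSound, a minimal transcription sketch using that library follows; the audio paths are placeholders:

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint through HuggingSound.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_wav2vec2_s878")

# Placeholder paths; audio should be sampled at 16kHz.
transcriptions = model.transcribe(["/path/to/clip1.mp3", "/path/to/clip2.wav"])
print(transcriptions[0]["transcription"])
```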
jonatasgrosman/exp_w2v2t_en_wav2vec2_s924
f113a0112a313ed0bda61b8bf7b08fbf8e5f74de
2022-07-08T04:12:02.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_wav2vec2_s924
2
null
transformers
26,553
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_wav2vec2_s924

Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_wav2vec2_s203
531d2d8ccc28b1b469d5306f6bbe4ce487233b06
2022-07-08T04:24:19.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_wav2vec2_s203
2
null
transformers
26,554
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_wav2vec2_s203

Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-100k_s807
5a33e0b54063c9e82a8d0b239d367624a4e41023
2022-07-08T04:33:29.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-100k_s807
2
null
transformers
26,555
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-100k_s807

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-100k_s421
d38a9e03f4843c1f105d1759341611d2549edc78
2022-07-08T04:43:53.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-100k_s421
2
null
transformers
26,556
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-100k_s421

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-100k_s364
61b7784f66d5ab805ed089fcfd1172ebdda19817
2022-07-08T04:56:51.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-100k_s364
2
null
transformers
26,557
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-100k_s364

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_xlsr-53_s870
40787267c1a087081439363d677c3dfbb1e91c96
2022-07-08T05:07:22.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_xlsr-53_s870
2
null
transformers
26,558
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_xlsr-53_s870

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_xlsr-53_s769
8e8fc38c913c10dfbc5ccd0fcb1fd319e96592d6
2022-07-08T05:19:10.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_xlsr-53_s769
2
null
transformers
26,559
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_xlsr-53_s769

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_xlsr-53_s279
4ffcb1c77f46b4962515a4ea915f01515872b5d5
2022-07-08T05:26:47.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_xlsr-53_s279
2
null
transformers
26,560
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_xlsr-53_s279

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech_s870
09712fb82027e2509a90ab355017fdff06bcee63
2022-07-08T05:31:32.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech_s870
2
null
transformers
26,561
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech_s870

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech_s227
9dc8c8b988725bfb863e93690281f5cf7c7e5daf
2022-07-08T05:36:00.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech_s227
2
null
transformers
26,562
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech_s227

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_hubert_s875
021cb5b660918c1abbf6c07953ae5636f4840760
2022-07-08T05:46:21.000Z
[ "pytorch", "hubert", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_hubert_s875
2
null
transformers
26,563
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_hubert_s875

Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_hubert_s596
ad9e9a61307b59b2594ced028af40d0f2a91e2fa
2022-07-08T05:50:29.000Z
[ "pytorch", "hubert", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_hubert_s596
2
null
transformers
26,564
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_hubert_s596

Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_hubert_s877
ed23e3d9d2ebcc3f31376a56d4cd988681a42b19
2022-07-08T05:55:00.000Z
[ "pytorch", "hubert", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_hubert_s877
2
null
transformers
26,565
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_hubert_s877

Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-sv_s320
acfcf901ef2fc377396de7f1106ef6b0e89171e0
2022-07-08T06:07:23.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-sv_s320
2
null
transformers
26,566
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-sv_s320

Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-sv_s438
4c443170949abf4295991c738aae504e75a24156
2022-07-08T06:11:38.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-sv_s438
2
null
transformers
26,567
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-sv_s438

Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_no-pretraining_s883
e6a51c56712629eef5441ba00595803af69fb645
2022-07-08T06:16:41.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_no-pretraining_s883
2
null
transformers
26,568
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_no-pretraining_s883

Fine-tuned randomly initialized wav2vec2 model for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_no-pretraining_s289
d3846176b90254995c1961f5080b62ce9e82b4af
2022-07-08T06:21:53.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_no-pretraining_s289
2
null
transformers
26,569
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_no-pretraining_s289

Fine-tuned randomly initialized wav2vec2 model for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_no-pretraining_s852
330ebe54b4d2219cffbdc64e023054867aedbcea
2022-07-08T06:27:19.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_no-pretraining_s852
2
null
transformers
26,570
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_no-pretraining_s852

Fine-tuned randomly initialized wav2vec2 model for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_wavlm_s767
ad9bf701ee4281ce2e7620a87883a704be067aa7
2022-07-08T06:33:36.000Z
[ "pytorch", "wavlm", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_wavlm_s767
2
null
transformers
26,571
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_wavlm_s767

Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_wavlm_s461
d0af76c9b7d070e03d8a91bc01d859cc2a7cc396
2022-07-08T06:40:13.000Z
[ "pytorch", "wavlm", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_wavlm_s461
2
null
transformers
26,572
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_wavlm_s461

Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_wavlm_s990
aa6179e21c769207312e6a580df247168812359f
2022-07-08T06:48:30.000Z
[ "pytorch", "wavlm", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_wavlm_s990
2
null
transformers
26,573
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_wavlm_s990

Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech-ml_s377
d31fb03caec3116a935662bbc6eebf0a2d5fc30e
2022-07-08T06:52:52.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech-ml_s377
2
null
transformers
26,574
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech-ml_s377

Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech-ml_s103
b53da00aa09e9116c344e5e909985134d08d9edd
2022-07-08T06:58:31.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech-ml_s103
2
null
transformers
26,575
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech-ml_s103

Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech-ml_s756
dcd90e7b4eb1d1423e38f8ad80b9b9aaa86dce8c
2022-07-08T07:05:35.000Z
[ "pytorch", "unispeech", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech-ml_s756
2
null
transformers
26,576
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech-ml_s756

Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-fr_s118
9e94ec787bad8d3ba30b94772a59507009b73705
2022-07-08T07:12:26.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-fr_s118
2
null
transformers
26,577
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-fr_s118

Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-fr_s691
f5020321c6a1374f4143585421755d3f4f4849dc
2022-07-08T07:20:48.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-fr_s691
2
null
transformers
26,578
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-fr_s691

Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-fr_s51
e62de7a25db301c26d864b4524ad097a8baacee4
2022-07-08T07:29:19.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-fr_s51
2
null
transformers
26,579
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-fr_s51

Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-es_s952
3e265f0be47e0c9da045894d0f6050bd62cdbe8b
2022-07-08T07:36:55.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-es_s952
2
null
transformers
26,580
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-es_s952

Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-es_s474
13a2edc8868934a8567416e445ef6b06b267faca
2022-07-08T07:45:27.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-es_s474
2
null
transformers
26,581
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-es_s474

Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-es_s186
4893feae151ffc6fd7ebcf932b9418fd27116652
2022-07-08T07:54:17.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-es_s186
2
null
transformers
26,582
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-es_s186

Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-nl_s169
ad418f92caabd0fa017e38f1df656840218877c4
2022-07-08T08:00:33.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-nl_s169
2
null
transformers
26,583
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-nl_s169

Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-nl_s281
7255060ca0c6b16721b4197af9fb048b56868c7a
2022-07-08T08:09:32.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-nl_s281
2
null
transformers
26,584
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-nl_s281

Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-nl_s980
daf48cad71373eebe553eb3bd3b2126c9cccc7e8
2022-07-08T08:17:30.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-nl_s980
2
null
transformers
26,585
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-nl_s980

Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s456
5b4cf70e56aa20393744f4d20acf4c1e156f6d3d
2022-07-08T08:26:50.000Z
[ "pytorch", "unispeech-sat", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s456
2
null
transformers
26,586
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech-sat_s456

Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s251
d636ec9a5e6dd256552ff0adeb32c7b120d0a1c7
2022-07-08T08:36:54.000Z
[ "pytorch", "unispeech-sat", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s251
2
null
transformers
26,587
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech-sat_s251

Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s459
b49cd8731a1e61570a92f4527f46e9a71f44d913
2022-07-08T08:46:57.000Z
[ "pytorch", "unispeech-sat", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_unispeech-sat_s459
2
null
transformers
26,588
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_unispeech-sat_s459

Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_xls-r_s957
b90af1deb90bd3f6e6516853b9c902f53429400c
2022-07-08T08:54:52.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_xls-r_s957
2
null
transformers
26,589
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_xls-r_s957

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).

When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_xls-r_s732
2e67c873c0827bbcc42b18e1aae6807fbae49325
2022-07-08T09:02:46.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_xls-r_s732
2
null
transformers
26,590
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_xls-r_s732

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_xls-r_s468
8fba84c5fa5e46eec3670a12535b6e53622133f8
2022-07-08T09:10:45.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_xls-r_s468
2
null
transformers
26,591
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_xls-r_s468

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s863
e730eca18440657cc27198def533b369154ef79d
2022-07-08T09:19:20.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s863
2
null
transformers
26,592
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_r-wav2vec2_s863

Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
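Where the input rate is unknown, the 16 kHz requirement can be enforced explicitly; a sketch assuming `torchaudio` for loading and resampling plus direct `Wav2Vec2ForCTC` inference (file path hypothetical):

```python
# Sketch: downmix to mono, resample to the 16 kHz the card requires, then
# run CTC inference directly. "speech.wav" is a hypothetical file.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s863"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

waveform, sr = torchaudio.load("speech.wav")
waveform = waveform.mean(dim=0)  # (channels, time) -> mono
if sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```

Resampling up front avoids the silent accuracy loss that comes from feeding, say, 44.1 kHz audio to a model trained on 16 kHz input.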
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s93
fb6aeecc9ce6fd1a3a8bf9855eede7e0ae7779ea
2022-07-08T09:28:53.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s93
2
null
transformers
26,593
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_r-wav2vec2_s93

Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
huggingtweets/markzero
ae4e371e25fed40b67f271447955a5e327291e70
2022-07-08T09:34:56.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/markzero
2
null
transformers
26,594
---
language: en
thumbnail: http://www.huggingtweets.com/markzero/1657272867878/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1540882647232266249/rccHZ22G_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">πŸ€– AI BOT πŸ€–</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mark zero dot earth</div>
<div style="text-align: center; font-size: 14px;">@markzero</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from mark zero dot earth.

| Data | mark zero dot earth |
| --- | --- |
| Tweets downloaded | 3206 |
| Retweets | 1045 |
| Short tweets | 155 |
| Tweets kept | 2006 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28cw7iz6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @markzero's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ekslgmqq) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ekslgmqq/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/markzero')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s44
cc187fb3b22a4bb1c11af4a454ed4f025837bc6d
2022-07-08T09:36:19.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_r-wav2vec2_s44
2
null
transformers
26,595
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_r-wav2vec2_s44

Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-it_s859
07502747e4b6998699976e6f2e9c18e06c878bea
2022-07-08T09:52:16.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-it_s859
2
null
transformers
26,596
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-it_s859

Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
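For long recordings, chunked inference is an option; a sketch assuming the ASR pipeline's `chunk_length_s` option (the 30 s value and the file path are assumptions, not tuned settings):

```python
# Sketch: chunked inference for long audio; chunk_length_s splits the file
# into overlapping windows whose transcriptions are stitched back together.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_en_vp-it_s859",
    chunk_length_s=30,  # assumed window size
)
print(asr("long_recording.wav")["text"])  # hypothetical 16 kHz file
```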
jonatasgrosman/exp_w2v2t_en_vp-it_s515
4c700e6e8f957a2f4ed86ed9d4985c38de99e8c8
2022-07-08T09:58:38.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-it_s515
2
null
transformers
26,597
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-it_s515

Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_en_vp-it_s250
be541291b396cc16b6ed65eb6a5e4dcf767aa282
2022-07-08T10:03:26.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_en_vp-it_s250
2
null
transformers
26,598
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_en_vp-it_s250

Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_th_wav2vec2_s729
4cfc3427ef2af8f4b764507319655f79e0c52747
2022-07-08T10:11:02.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "th", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_th_wav2vec2_s729
2
null
transformers
26,599
---
language:
- th
license: apache-2.0
tags:
- automatic-speech-recognition
- th
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_th_wav2vec2_s729

Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
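The same pipeline pattern carries over to the Thai checkpoint; a sketch assuming a character-level CTC vocabulary, so the raw output is an unsegmented Thai string (file path hypothetical):

```python
# Sketch: Thai transcription with the same ASR pipeline. Thai is written
# without word spaces, so downstream word segmentation may still be needed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_th_wav2vec2_s729",
)
print(asr("thai_sample.wav")["text"])  # hypothetical 16 kHz file
```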