Dataset schema (column, type, observed range or number of classes):

| Column | Type | Range / classes |
|---|---|---|
| modelId | string | length 4 to 112 |
| sha | string | length 40 |
| lastModified | string | length 24 |
| tags | sequence | |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | length 2 to 38 |
| config | null | |
| id | string | length 4 to 112 |
| downloads | float64 | 0 to 36.8M |
| likes | float64 | 0 to 712 |
| library_name | string | 17 classes |
| __index_level_0__ | int64 | 0 to 38.5k |
| readme | string | length 0 to 186k |
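The rows that follow are per-model records matching this schema. As a minimal, hedged sketch of how such a dump could be loaded and filtered with the `datasets` library (the repository id below is a placeholder, since the source does not name the dataset):

```python
from datasets import load_dataset

# Placeholder repo id: the dump's actual dataset name is not given in the source.
ds = load_dataset("your-username/hub-model-metadata-dump", split="train")

print(ds.column_names)                          # modelId, sha, lastModified, tags, ...
print(ds[0]["modelId"], ds[0]["pipeline_tag"])

# Keep only rows whose README is more than a stub.
with_cards = ds.filter(lambda row: row["readme"] not in ("", "Entry not found"))
print(f"{len(with_cards)} of {len(ds)} rows have a README")
```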
plbr/my_model
c55378e42af34fb39b82969d0ef7a248e10e9c42
2022-04-14T05:22:56.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
false
plbr
null
plbr/my_model
3
null
transformers
22,200
Entry not found
obokkkk/koelectra-base-v3-discriminator-finetuned-klue-v4
1d2d62641ec7998be9ca27778daa39b55c17efdc
2022-04-14T04:32:20.000Z
[ "pytorch", "electra", "question-answering", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
obokkkk
null
obokkkk/koelectra-base-v3-discriminator-finetuned-klue-v4
3
null
transformers
22,201
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: koelectra-base-v3-discriminator-finetuned-klue-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # koelectra-base-v3-discriminator-finetuned-klue-v4 This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.4979 | 0.33 | 500 | 4.0470 | | 3.2001 | 0.65 | 1000 | 2.3172 | | 2.215 | 0.98 | 1500 | 1.9043 | | 1.7849 | 1.31 | 2000 | 1.7181 | | 1.6156 | 1.63 | 2500 | 1.5955 | | 1.5295 | 1.96 | 3000 | 1.5071 | | 1.2147 | 2.29 | 3500 | 1.5872 | | 1.1727 | 2.61 | 4000 | 1.5104 | | 1.1467 | 2.94 | 4500 | 1.6059 | | 0.9972 | 3.27 | 5000 | 1.6523 | | 0.9791 | 3.59 | 5500 | 1.6219 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.12.1
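The card above stops at training details. A minimal inference sketch, not part of the original card, using the standard `transformers` question-answering pipeline with an illustrative Korean question/context pair (the checkpoint is assumed to bundle its tokenizer):

```python
from transformers import pipeline

# Extractive QA with the fine-tuned KoELECTRA discriminator checkpoint.
qa = pipeline(
    "question-answering",
    model="obokkkk/koelectra-base-v3-discriminator-finetuned-klue-v4",
)

result = qa(
    question="대한민국의 수도는 어디인가?",        # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울특별시이다.",    # "The capital of South Korea is Seoul."
)
print(result["answer"], result["score"])
```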
huggingtweets/credenzaclear2-dril-nia_mp4
75193fcddcc4e0fa8a1049ccd016fdfeb536edca
2022-04-14T04:40:26.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/credenzaclear2-dril-nia_mp4
3
null
transformers
22,202
--- language: en thumbnail: http://www.huggingtweets.com/credenzaclear2-dril-nia_mp4/1649911222622/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1487740104340918272/7c9spp2E_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1511875789213638656/WdSSvAhj_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Nia & Audrey Horne</div> <div style="text-align: center; font-size: 14px;">@credenzaclear2-dril-nia_mp4</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Nia & Audrey Horne. | Data | wint | Nia | Audrey Horne | | --- | --- | --- | --- | | Tweets downloaded | 3229 | 1552 | 626 | | Retweets | 477 | 28 | 74 | | Short tweets | 303 | 133 | 124 | | Tweets kept | 2449 | 1391 | 428 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rarj99g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @credenzaclear2-dril-nia_mp4's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20c2vigo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20c2vigo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/credenzaclear2-dril-nia_mp4') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
andidu/aaabbbcccddd
56abf6d48254723c7958bfd8c80c7f00e7164895
2022-05-20T17:56:34.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
andidu
null
andidu/aaabbbcccddd
3
null
transformers
22,203
Entry not found
ndavid/autotrain-trec-fine-bert-739422530
3956a431b56005206095aadad0cba790a9bee183
2022-04-14T09:39:42.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:ndavid/autotrain-data-trec-fine-bert", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
ndavid
null
ndavid/autotrain-trec-fine-bert-739422530
3
null
transformers
22,204
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - ndavid/autotrain-data-trec-fine-bert co2_eq_emissions: 0.02238820299105448 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 739422530 - CO2 Emissions (in grams): 0.02238820299105448 ## Validation Metrics - Loss: 0.36623290181159973 - Accuracy: 0.9321753515301903 - Macro F1: 0.9066706944656866 - Micro F1: 0.9321753515301903 - Weighted F1: 0.9314858667247282 - Macro Precision: 0.9489233194839841 - Micro Precision: 0.9321753515301903 - Weighted Precision: 0.9347346558570125 - Macro Recall: 0.8842587178845419 - Micro Recall: 0.9321753515301903 - Weighted Recall: 0.9321753515301903 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ndavid/autotrain-trec-fine-bert-739422530 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ndavid/autotrain-trec-fine-bert-739422530", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ndavid/autotrain-trec-fine-bert-739422530", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ning-fish/xlm-roberta-base-finetuned-panx-de
964bf3bd3c72ef0535ec77cfd5857b6cfe9d9782
2022-04-14T15:17:38.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Ning-fish
null
Ning-fish/xlm-roberta-base-finetuned-panx-de
3
null
transformers
22,205
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8591260810195721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.257 | 1.0 | 525 | 0.1512 | 0.8302 | | 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 | | 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
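Since the card lists metrics but no usage snippet, here is a minimal sketch (not from the original card) of running the checkpoint through the token-classification pipeline on an illustrative German sentence:

```python
from transformers import pipeline

# NER on German text (the model was fine-tuned on PAN-X.de).
ner = pipeline(
    "token-classification",
    model="Ning-fish/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

for entity in ner("Jeff Dean arbeitet bei Google in Kalifornien."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```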
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022
3878f663db355e8c6ea329b79f134bbf9e52e4df
2022-04-14T17:16:01.000Z
[ "pytorch", "tensorboard", "xlnet", "question-answering", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
question-answering
false
nntadotzip
null
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022
3
null
transformers
22,206
--- license: mit tags: - generated_from_trainer model-index: - name: xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 357 | 0.6451 | | 0.8416 | 2.0 | 714 | 0.4428 | | 0.5227 | 3.0 | 1071 | 0.4240 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
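As with the KoELECTRA card earlier, no usage example is given. A lower-level sketch, illustrative only (the question and context below are invented, and the repo is assumed to ship a compatible tokenizer), that shows where the answer span comes from:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What does the chatbot answer questions about?"   # invented example
context = "The IU chatbot answers questions about ontology concepts used in the course."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span selection: most likely start and end token, then decode in between.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1], skip_special_tokens=True))
```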
hxz116/distilbert-base-uncased-finetuned-cola
e582dd58de7486c6e21e35edb624cd1057d50a16
2022-04-14T19:55:05.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
hxz116
null
hxz116/distilbert-base-uncased-finetuned-cola
3
null
transformers
22,207
Entry not found
SophieTr/PPO-policy_v2
24cb8b30547567b8eb41da421b049229578bf3be
2022-04-19T01:42:45.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
SophieTr
null
SophieTr/PPO-policy_v2
3
null
transformers
22,208
Entry not found
NeuralNotwork/blenderbot-400M-baseline
323e89ee1123c3a59c4743c29bee5b0bf45aa711
2022-04-15T05:45:35.000Z
[ "pytorch", "blenderbot", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
NeuralNotwork
null
NeuralNotwork/blenderbot-400M-baseline
3
null
transformers
22,209
Entry not found
NeuralNotwork/blenderbot-400M-ul-ts
5ce885ceb53bd545d1899b5b78ff4594f4ee4dac
2022-04-15T09:07:50.000Z
[ "pytorch", "blenderbot", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
NeuralNotwork
null
NeuralNotwork/blenderbot-400M-ul-ts
3
null
transformers
22,210
Entry not found
SophieTr/PPO-policy_v3
6a64ac236281c8f1e86f93d527196828ae3b6431
2022-04-22T14:28:39.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
SophieTr
null
SophieTr/PPO-policy_v3
3
null
transformers
22,211
Entry not found
birgermoell/psst-fairseq-larger-rir
0a470fc31c1ab22a6c3314aa72b8d38b61e593e9
2022-04-15T13:59:09.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "en", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
birgermoell
null
birgermoell/psst-fairseq-larger-rir
3
null
transformers
22,212
---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
---

This model is trained on the PSST Challenge data, together with a subset of TIMIT that was augmented using Room Impulse Response (RIR). A file containing the list of TIMIT IDs is in the repository (`timit-ids.txt`).

The model was fine-tuned from [Wav2vec 2.0 Large, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec), and the results on the validation set were **PER:** 21.0%, **FER:** 9.2%.
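A minimal decoding sketch, not part of the original card: it assumes the repository bundles a matching `Wav2Vec2Processor` (tokenizer plus feature extractor) and a local 16 kHz clip; greedy CTC decoding yields the phoneme-level transcription that the PER/FER numbers above refer to.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

name = "birgermoell/psst-fairseq-larger-rir"
processor = Wav2Vec2Processor.from_pretrained(name)   # assumed to be bundled with the repo
model = Wav2Vec2ForCTC.from_pretrained(name)

# Load a mono clip and resample to the 16 kHz rate wav2vec 2.0 expects.
waveform, sample_rate = torchaudio.load("speech_sample.wav")   # placeholder path
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)      # greedy CTC decoding
print(processor.batch_decode(pred_ids))
```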
birgermoell/psst-fairseq-pitch-shift-timit
ddbb2fd8c364520ab82eec898b8f71c170f65b57
2022-04-15T13:38:14.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
birgermoell
null
birgermoell/psst-fairseq-pitch-shift-timit
3
null
transformers
22,213
Entry not found
aseifert/comma-xlm-roberta-large
a1d90c0277c63314d8fdd6fed39b6aef38ef05a6
2022-04-16T08:49:09.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
aseifert
null
aseifert/comma-xlm-roberta-large
3
null
transformers
22,214
Entry not found
dennishe97/longformer-code-relatedness-model
886e804d91ddeda293a198f1b43f6960fae77f0f
2022-04-16T05:34:04.000Z
[ "pytorch", "longformer", "feature-extraction", "transformers" ]
feature-extraction
false
dennishe97
null
dennishe97/longformer-code-relatedness-model
3
null
transformers
22,215
Entry not found
jason9693/KcELECTRA-base-apeach
9b0456868e57315cd960ae0bbdbfee88cccdfc8c
2022-04-16T14:20:19.000Z
[ "pytorch", "electra", "text-classification", "ko", "dataset:jason9693/APEACH", "transformers" ]
text-classification
false
jason9693
null
jason9693/KcELECTRA-base-apeach
3
null
transformers
22,216
--- language: ko widget: - text: "응 어쩔티비~~" datasets: - jason9693/APEACH ---
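The card is only front matter; judging from the tags and the APEACH dataset reference, this is a Korean offensive-language classifier. A minimal sketch (not from the card) using the text-classification pipeline with the widget example:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="jason9693/KcELECTRA-base-apeach")

print(clf("응 어쩔티비~~"))            # the widget example from the card's front matter
print(clf("오늘 날씨 정말 좋네요."))    # "The weather is really nice today."
```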
V3RX2000/distilbert-base-uncased-finetuned-imdb-accelerate
b1be83979e7faba4a5b965259add189b5e7fc314
2022-04-16T06:51:29.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
V3RX2000
null
V3RX2000/distilbert-base-uncased-finetuned-imdb-accelerate
3
null
transformers
22,217
Entry not found
Pavithra/madgrad-best-version
d72f4bfc09eccc4774b988008e4e81d7dd7ccc1a
2022-04-18T01:34:31.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
Pavithra
null
Pavithra/madgrad-best-version
3
null
transformers
22,218
Entry not found
adnankhawaja/R_T_FB_LM
f6f0065eefe573ca50d8d7c816f579c5c8de2798
2022-04-17T08:06:06.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
adnankhawaja
null
adnankhawaja/R_T_FB_LM
3
null
transformers
22,219
Entry not found
rmihaylov/roberta-base-use-qa-theseus-bg
fd4d6b48338499e0462b8921bb936eabfe103be8
2022-04-18T10:25:59.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "bg", "dataset:oscar", "dataset:chitanka", "dataset:wikipedia", "arxiv:2004.09813", "arxiv:2002.02925", "transformers", "torch", "license:mit", "sentence-similarity" ]
sentence-similarity
false
rmihaylov
null
rmihaylov/roberta-base-use-qa-theseus-bg
3
null
transformers
22,220
--- inference: false pipeline_tag: sentence-similarity language: - bg license: mit datasets: - oscar - chitanka - wikipedia tags: - torch --- # ROBERTA BASE (cased) trained on private Bulgarian-English parallel data This is a Multilingual Roberta model. It could be used for creating embeddings of Bulgarian sentences. Using the ideas from [Sentence-BERT](https://arxiv.org/abs/2004.09813), the training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. The teacher model is the [USE model by Google](https://aclanthology.org/D18-2029/). This model is cased: it does make a difference between bulgarian and Bulgarian. It was trained on private Bulgarian-English parallel data. Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925). ### How to use Here is how to use this model in PyTorch: ```python >>> import scipy >>> import torch >>> from transformers import AutoModel, AutoTokenizer >>> >>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-use-qa-theseus-bg') >>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-use-qa-theseus-bg') >>> >>> query = "Какви са съставките на бисквитките?" >>> >>> answers = [ >>> "Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.", >>> "Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.", >>> "В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат ​​бисквити.", >>> "Бисквитите Chewier понякога се наричат ​​бисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.", >>> "Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.", >>> "Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.", >>> "Бисквитките често се сервират с напитки като мляко, кафе или чай.", >>> "Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.", >>> "Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.", >>> "Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).", >>> "Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. 
Температурата на фурната варира от 250 до 350 градуса.", >>> "Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.", >>> ] >>> >>> query_embedding = model.question(**tokenizer.encode_plus(query, return_tensors='pt')).detach().numpy()[0] >>> >>> corpus, corpus_embeddings = [], [] >>> for answer in answers: >>> value_inputs = tokenizer.encode_plus(answer, answer, return_tensors='pt') >>> embedding = model.answer(**value_inputs).detach().numpy()[0] >>> corpus.append(answer) >>> corpus_embeddings.append(embedding) >>> >>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0] >>> >>> results = zip(range(len(distances)), distances) >>> results = sorted(results, key=lambda x: x[1]) >>> >>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results]) [['Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.', 0.5449754306536151], ['Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.', 0.5049509545814316], ['В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат \u200b\u200bбисквити.', 0.5029661338050297], ['Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.', 0.4991678233218718], ['Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.', 0.49050297326146386], ['Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.', 0.48950875441294106], ['Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.', 0.48646309549536737], ['Бисквитите Chewier понякога се наричат \u200b\u200bбисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.', 0.4840599482604815], ['Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).', 0.45209677893728206], ['Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. Температурата на фурната варира от 250 до 350 градуса.', 0.4511516464302119], ['Бисквитките често се сервират с напитки като мляко, кафе или чай.', 0.42364528401677803], ['Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.', 0.3267314582662877]] ```
zoha/wav2vec2-base-timit-demo-colab
c740187a3550aac39028a335ebbf86f83b86b959
2022-04-18T16:40:09.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
zoha
null
zoha/wav2vec2-base-timit-demo-colab
3
null
transformers
22,221
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
nbhimte/tiny-bert-best
8c22e512e72c0ed78354fadaeac59be15a8b73e2
2022-04-18T11:46:11.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
nbhimte
null
nbhimte/tiny-bert-best
3
null
transformers
22,222
```
TrainOutput(global_step=2456, training_loss=0.29150783277878156, metrics={'train_runtime': 939.2154, 'train_samples_per_second': 167.246, 'train_steps_per_second': 2.615, 'total_flos': 321916620637920.0, 'train_loss': 0.29150783277878156, 'epoch': 4.0})
```
Auruncus/gpt-j-6b-8bit-FT
79930960a42b221c458740ddcb124af9d2686f33
2022-04-18T20:17:14.000Z
[ "pytorch", "gptj", "text-generation", "transformers" ]
text-generation
false
Auruncus
null
Auruncus/gpt-j-6b-8bit-FT
3
null
transformers
22,223
Entry not found
jenspt/bert_classification
aed660be8ba2f5dce313114ad674753d4007704f
2022-04-19T10:39:10.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
jenspt
null
jenspt/bert_classification
3
null
transformers
22,224
Entry not found
frozenwalker/SciFive_pubmedqa_question_generation_nmconcept
ab0040a0b456a761efa87439d3313ecddc1cb087
2022-04-19T10:54:11.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
frozenwalker
null
frozenwalker/SciFive_pubmedqa_question_generation_nmconcept
3
null
transformers
22,225
Entry not found
tuhailong/bi_encoder_roberta-wwm-ext
8ada21c915945bfd3060367ad751a28b1d826d55
2022-04-20T02:45:22.000Z
[ "pytorch", "bert", "feature-extraction", "zh", "dataset:dialogue", "transformers", "sbert" ]
feature-extraction
false
tuhailong
null
tuhailong/bi_encoder_roberta-wwm-ext
3
null
transformers
22,226
---
language: zh
tags:
- sbert
datasets:
- dialogue
---

# Data
The training data consists of sentence-similarity pairs from e-commerce dialogue, about 500k (50万) sentence pairs.

## Model
The model was created with [sentence-transformers](https://www.sbert.net/index.html); its structure is a bi-encoder.

### Usage
```python
>>> from sentence_transformers import SentenceTransformer, util
>>> model = SentenceTransformer("tuhailong/bi_encoder_roberta-wwm-ext", device="cuda:1")
>>> model.max_seq_length = 32
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> embeddings1 = model.encode([sentences[0]], convert_to_tensor=True)
>>> embeddings2 = model.encode([sentences[1]], convert_to_tensor=True)
>>> scores = util.cos_sim(embeddings1, embeddings2).cpu().numpy()
>>> print(scores)
```

#### Code
The training code is at https://github.com/TTurn/bi-encoder.

##### PS
Because a pooling layer and a dense layer are added after the model, the model files include subfolders. The repository therefore contains the additional files "1_Pooling-config.json", "2_Dense-config.json" and "2_Dense-pytorch_model.bin". After downloading these files, rename them to "1_Pooling/config.json", "2_Dense/config.json" and "2_Dense/pytorch_model.bin".
anshr/t5-small_supervised_baseline_01
15cc9098acb754745c5066bce957904959cb8d33
2022-04-19T15:15:50.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
anshr
null
anshr/t5-small_supervised_baseline_01
3
null
transformers
22,227
Entry not found
GPL/webis-touche2020-msmarco-distilbert-gpl
3b9885ddaffcedfda10dfef9c536638dc945c244
2022-04-19T15:16:14.000Z
[ "pytorch", "distilbert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
GPL
null
GPL/webis-touche2020-msmarco-distilbert-gpl
3
null
sentence-transformers
22,228
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/hotpotqa-tsdae-msmarco-distilbert-gpl
638329fe91a9675c8b8826f863606490dec7870e
2022-04-19T15:24:12.000Z
[ "pytorch", "distilbert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
GPL
null
GPL/hotpotqa-tsdae-msmarco-distilbert-gpl
3
null
sentence-transformers
22,229
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/trec-news-tsdae-msmarco-distilbert-gpl
fdc9e5b5339e09bb3ccf26390eb5cd019dcf3d5b
2022-04-19T15:26:45.000Z
[ "pytorch", "distilbert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
GPL
null
GPL/trec-news-tsdae-msmarco-distilbert-gpl
3
null
sentence-transformers
22,230
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
GPL/fever-tsdae-msmarco-distilbert-margin-mse
ee01372c4e14d3c91273885f7d67c5a1a7eb5e3a
2022-04-19T16:43:37.000Z
[ "pytorch", "distilbert", "feature-extraction", "transformers" ]
feature-extraction
false
GPL
null
GPL/fever-tsdae-msmarco-distilbert-margin-mse
3
null
transformers
22,231
Entry not found
GPL/hotpotqa-tsdae-msmarco-distilbert-margin-mse
50afdbd3640b1a3a699db7f699a40ff117e6c228
2022-04-19T16:44:10.000Z
[ "pytorch", "distilbert", "feature-extraction", "transformers" ]
feature-extraction
false
GPL
null
GPL/hotpotqa-tsdae-msmarco-distilbert-margin-mse
3
null
transformers
22,232
Entry not found
GPL/webis-touche2020-tsdae-msmarco-distilbert-margin-mse
f7b6241f93da7f17c7caac2649635e202dc0b7de
2022-04-19T16:46:45.000Z
[ "pytorch", "distilbert", "feature-extraction", "transformers" ]
feature-extraction
false
GPL
null
GPL/webis-touche2020-tsdae-msmarco-distilbert-margin-mse
3
null
transformers
22,233
Entry not found
celinelee/bart-finetuned-conala-3
3e237f46f1ce854df7670c6924fe4ec70010cb47
2022-04-20T15:10:58.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
celinelee
null
celinelee/bart-finetuned-conala-3
3
1
transformers
22,234
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge - bleu model-index: - name: bart-finetuned-conala-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-finetuned-conala-3 This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an CoNaLa. It achieves the following results on the evaluation set: - Loss: 1.8253 - Rouge1: 47.4345 - Rouge2: 23.8936 - Rougel: 45.317 - Rougelsum: 45.4339 - Bleu: 0.0657 - Gen Len: 58.0 ## Model description More information needed ## Intended uses & limitations Code snippet -> NL intent ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:| | No log | 0.08 | 50 | 2.7823 | 35.8458 | 12.1898 | 33.7466 | 33.8377 | 0.0041 | 58.0 | | No log | 0.17 | 100 | 2.4223 | 37.2633 | 13.429 | 34.4943 | 34.5533 | 0.0087 | 58.0 | | No log | 0.25 | 150 | 2.2696 | 40.6963 | 16.5785 | 38.1213 | 38.16 | 0.0167 | 58.0 | | No log | 0.34 | 200 | 2.3168 | 41.3324 | 17.292 | 39.0117 | 39.113 | 0.0173 | 58.0 | | No log | 0.42 | 250 | 2.3187 | 41.1345 | 16.6829 | 38.8514 | 38.891 | 0.0237 | 58.0 | | No log | 0.5 | 300 | 2.1701 | 41.0145 | 17.5601 | 39.166 | 39.249 | 0.0206 | 58.0 | | No log | 0.59 | 350 | 2.2035 | 41.7506 | 17.7251 | 39.4856 | 39.5647 | 0.0292 | 58.0 | | No log | 0.67 | 400 | 2.1006 | 43.0324 | 19.9801 | 40.8704 | 40.9399 | 0.0319 | 58.0 | | No log | 0.76 | 450 | 2.0563 | 43.2151 | 18.7409 | 40.4183 | 40.502 | 0.0244 | 58.0 | | 2.4902 | 0.84 | 500 | 2.0468 | 43.2215 | 18.3484 | 40.9498 | 41.0682 | 0.0317 | 58.0 | | 2.4902 | 0.92 | 550 | 2.0222 | 44.9934 | 19.8389 | 42.4478 | 42.5687 | 0.0372 | 58.0 | | 2.4902 | 1.01 | 600 | 2.1095 | 43.8293 | 19.5682 | 40.882 | 40.9518 | 0.0311 | 58.0 | | 2.4902 | 1.09 | 650 | 2.0124 | 43.6928 | 19.6878 | 39.6602 | 39.7368 | 0.0417 | 58.0 | | 2.4902 | 1.18 | 700 | 2.0027 | 46.2115 | 21.9475 | 43.5869 | 43.6713 | 0.0477 | 58.0 | | 2.4902 | 1.26 | 750 | 1.9599 | 45.9388 | 22.0368 | 43.4731 | 43.5656 | 0.043 | 58.0 | | 2.4902 | 1.34 | 800 | 1.9467 | 44.7518 | 20.4755 | 42.489 | 42.6274 | 0.0394 | 58.0 | | 2.4902 | 1.43 | 850 | 1.9643 | 44.1584 | 20.8833 | 41.8848 | 41.9733 | 0.0441 | 58.0 | | 2.4902 | 1.51 | 900 | 1.8926 | 47.3789 | 22.9104 | 45.0164 | 45.0822 | 0.0445 | 58.0 | | 2.4902 | 1.6 | 950 | 1.8855 | 46.8329 | 22.1133 | 44.1788 | 44.2666 | 0.0431 | 58.0 | | 1.8023 | 1.68 | 1000 | 1.9160 | 47.1319 | 22.9792 | 44.4807 | 44.6103 | 0.0475 | 58.0 | | 1.8023 | 1.76 | 1050 | 1.8498 | 48.8005 | 24.4785 | 46.4564 | 46.5427 | 0.0576 | 58.0 | | 1.8023 | 1.85 | 1100 | 1.8611 | 47.8327 | 23.2086 | 45.5999 | 45.6868 | 0.0487 | 58.0 | | 1.8023 | 1.93 | 1150 | 1.8497 | 47.7267 | 23.2021 | 45.5104 | 45.546 | 0.0512 | 58.0 | | 1.8023 | 2.02 | 1200 | 1.8335 | 47.1502 | 22.8336 | 44.7614 | 44.7927 | 0.0566 | 58.0 | | 1.8023 | 2.1 | 1250 | 1.8779 | 46.6645 | 22.9162 | 44.0086 | 44.2021 | 0.0539 | 58.0 | | 
1.8023 | 2.18 | 1300 | 1.8514 | 48.1544 | 24.7977 | 45.949 | 46.0254 | 0.0719 | 58.0 | | 1.8023 | 2.27 | 1350 | 1.8658 | 46.7655 | 23.4813 | 44.5872 | 44.6907 | 0.069 | 58.0 | | 1.8023 | 2.35 | 1400 | 1.8400 | 46.2749 | 23.6528 | 44.3149 | 44.4056 | 0.0572 | 58.0 | | 1.8023 | 2.44 | 1450 | 1.8343 | 46.6169 | 23.8005 | 44.5486 | 44.6125 | 0.0547 | 58.0 | | 1.3851 | 2.52 | 1500 | 1.8220 | 47.4739 | 24.3457 | 45.4959 | 45.6216 | 0.0662 | 58.0 | | 1.3851 | 2.61 | 1550 | 1.8333 | 47.6311 | 24.3616 | 45.5904 | 45.6146 | 0.0666 | 58.0 | | 1.3851 | 2.69 | 1600 | 1.8091 | 47.4633 | 24.0785 | 45.2493 | 45.2845 | 0.0645 | 58.0 | | 1.3851 | 2.77 | 1650 | 1.8085 | 47.6495 | 23.8386 | 45.5077 | 45.5848 | 0.0639 | 58.0 | | 1.3851 | 2.86 | 1700 | 1.8377 | 46.9721 | 23.4325 | 44.8386 | 44.9003 | 0.0647 | 58.0 | | 1.3851 | 2.94 | 1750 | 1.8238 | 47.5266 | 23.9843 | 45.3897 | 45.473 | 0.0653 | 58.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 2.1.0 - Tokenizers 0.10.3
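The card states the intended use as "code snippet -> NL intent" but gives no usage snippet. A minimal generation sketch, not from the original card; the beam-search setting and the max length of 58 (matching the Gen Len column) are assumptions:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "celinelee/bart-finetuned-conala-3"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# CoNaLa direction noted in the card: code snippet in, natural-language intent out.
snippet = "sorted(d.items(), key=lambda kv: kv[1], reverse=True)"
inputs = tokenizer(snippet, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=58, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```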
omar47/wav2vec2-large-xls-r-300m-urdu-cv8-200epochs
2e040ad35bfa2e6063a5a4bd869b7d9cd8d3921b
2022-04-21T05:43:51.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "model-index" ]
automatic-speech-recognition
false
omar47
null
omar47/wav2vec2-large-xls-r-300m-urdu-cv8-200epochs
3
null
transformers
22,235
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-urdu-cv8-200epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-urdu-cv8-200epochs This model was trained from scratch on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.3200 - Wer: 0.7723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 13 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3204 | 1.27 | 32 | 1.3200 | 0.7723 | | 0.3021 | 2.55 | 64 | 1.3200 | 0.7723 | | 0.3153 | 3.82 | 96 | 1.3200 | 0.7723 | | 0.3239 | 5.12 | 128 | 1.3200 | 0.7723 | | 0.3153 | 6.39 | 160 | 1.3200 | 0.7723 | | 0.3202 | 7.67 | 192 | 1.3200 | 0.7723 | | 0.3126 | 8.94 | 224 | 1.3200 | 0.7723 | | 0.3183 | 10.24 | 256 | 1.3200 | 0.7723 | | 0.3135 | 11.51 | 288 | 1.3200 | 0.7723 | | 0.3137 | 12.78 | 320 | 1.3200 | 0.7723 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.12.1
birgermoell/wav2vec2-common_voice-lithuanian
56ea298da3aee6555f11e072cf60e0fe986d1811
2022-04-20T08:38:35.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "lt", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
birgermoell
null
birgermoell/wav2vec2-common_voice-lithuanian
3
null
transformers
22,236
--- language: - lt license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-lithuanian results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-lithuanian This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - LT dataset. It achieves the following results on the evaluation set: - Loss: 0.5988 - Wer: 0.6546 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 2.7 | 100 | 3.5524 | 1.0 | | No log | 5.41 | 200 | 3.0275 | 1.0 | | No log | 8.11 | 300 | 1.8796 | 1.0003 | | No log | 10.81 | 400 | 0.6796 | 0.7686 | | 3.3102 | 13.51 | 500 | 0.6373 | 0.7297 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
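A minimal transcription sketch, not part of the original card, using the ASR pipeline; the file name is a placeholder, and decoding an audio path this way requires ffmpeg to be installed:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="birgermoell/wav2vec2-common_voice-lithuanian",
    chunk_length_s=30,   # chunked inference for clips longer than ~30 s
)
print(asr("lithuanian_sample.wav")["text"])   # placeholder path
```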
mwong/roberta-base-fever-evidence-related
e29e51200474b100a89df9d166366eec097f7932
2022-06-24T03:33:25.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:mwong/fever-evidence-related", "transformers", "text classification", "fact checking", "license:mit" ]
text-classification
false
mwong
null
mwong/roberta-base-fever-evidence-related
3
1
transformers
22,237
--- language: en license: mit tags: - text classification - fact checking datasets: - mwong/fever-evidence-related widget: - text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located." example_title: "Evidence related to claim" metrics: f1 --- # FeverRoberta FeverRoberta is a classifier model that predicts if evidence is related to query claim. The model achieved F1 score of 92.67% with test dataset "mwong/fever-evidence-related". Using pretrained roberta-base model, the classifier head is trained on Fever dataset.
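The widget example above joins claim and evidence with `</s></s>`; passing the two texts as a sentence pair to a RoBERTa tokenizer produces the same separator. A minimal sketch, not part of the original card (the claim/evidence strings and the reliance on `id2label` for label names are assumptions):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "mwong/roberta-base-fever-evidence-related"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

claim = "Earth's changing climate poses the risk of significant disruptions."
evidence = "Legislation has been considered because of fears of climate change."

# Sentence-pair encoding inserts the </s></s> separator shown in the widget example.
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

labels = model.config.id2label   # label names depend on how the classifier head was exported
print({labels[i]: round(float(p), 3) for i, p in enumerate(probs)})
```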
mwong/albert-base-climate-claim-related
575eff2235e0f2de7532b412457c46c105a4dada
2022-06-24T03:35:34.000Z
[ "pytorch", "albert", "text-classification", "en", "dataset:mwong/fever-claim-related", "dataset:mwong/climate-claim-related", "transformers", "text classification", "fact checking", "license:mit" ]
text-classification
false
mwong
null
mwong/albert-base-climate-claim-related
3
1
transformers
22,238
--- language: en license: mit tags: - text classification - fact checking datasets: - mwong/fever-claim-related - mwong/climate-claim-related widget: - text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located." example_title: "Evidence related to claim" metrics: f1 --- # ClimateAlbert ClimateAlbert is a classifier model that predicts if climate related evidence is related to query claim. The model achieved F1 score of 85.33% with test dataset "mwong/climate-claim-related". Using pretrained albert-base-v2 model, the classifier head is trained on Fever dataset and adapted to climate domain using ClimateFever dataset.
mwong/climatebert-base-f-climate-evidence-related
86db8ab1ea7f751178607a4e99d8263f9093f318
2022-06-24T03:32:39.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:mwong/fever-evidence-related", "dataset:mwong/climate-evidence-related", "transformers", "text classification", "fact checking", "license:mit" ]
text-classification
false
mwong
null
mwong/climatebert-base-f-climate-evidence-related
3
1
transformers
22,239
--- language: en license: mit tags: - text classification - fact checking datasets: - mwong/fever-evidence-related - mwong/climate-evidence-related widget: - text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located." example_title: "Evidence related to claim" metrics: f1 --- # ClimateBert-related ClimateBert-related is a classifier model that predicts if climate related evidence is related to query claim. The model achieved F1 score of 81.90% with test dataset "mwong/climate-evidence-related". Using pretrained ClimateBert-f model, the classifier head is trained on Fever dataset and adapted to climate domain using ClimateFever dataset.
orendar/light_generator
aef79913bf8a358114d1fa6f6015806da3254726
2022-04-20T16:35:27.000Z
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
false
orendar
null
orendar/light_generator
3
null
transformers
22,240
Entry not found
FrozenWolf/dummy-model
52afddf333219d6c587606d952cd1f09b57fbe33
2022-04-20T18:05:28.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
FrozenWolf
null
FrozenWolf/dummy-model
3
null
transformers
22,241
Entry not found
BigSalmon/InformalToFormalLincoln39
d40444abcc06a601d3308f56af200b7b62d5226b
2022-04-24T15:00:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/InformalToFormalLincoln39
3
null
transformers
22,242
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln39") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln39") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ```
supriyaraj47/deberta-base-nli
56d45cc173ac1f5fb68da6d1b5cc7174d22012ca
2022-04-20T21:33:22.000Z
[ "pytorch", "deberta-v2", "text-classification", "transformers" ]
text-classification
false
supriyaraj47
null
supriyaraj47/deberta-base-nli
3
null
transformers
22,243
Entry not found
Goud/DziriBERT-summarization-goud
f423211db351cf22cffa3d7f6988df8d04e2f4c7
2022-04-29T15:06:30.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "Moroccan Arabic (MA)", "Modern Standard Arabic (MSA)", "dataset:Goud/Goud-sum", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
Goud
null
Goud/DziriBERT-summarization-goud
3
1
transformers
22,244
--- datasets: - Goud/Goud-sum language: - "Moroccan Arabic (MA)" - "Modern Standard Arabic (MSA)" metrics: - rouge tags: - summarization widget: - text: "توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت. وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير. ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها. ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة. وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”. وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي. وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا. " --- This model was introduced in [this paper](https://openreview.net/forum?id=BMVq5MELb9). It is an encoder-decoder model that was initialized with [DziriBERT](https://huggingface.co/alger-ia/dziribert) checkpoint. The model is finetuned for text summarization on [Goud dataset](https://huggingface.co/datasets/Goud/Goud-sum). ## How to use This is how you can use this model ```python from transformers import EncoderDecoderModel, BertTokenizer article = """توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت. وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير. ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها. ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة. وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”. وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي. وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا. 
""" tokenizer = BertTokenizer.from_pretrained("Goud/DziriBERT-summarization-goud") model = EncoderDecoderModel.from_pretrained("Goud/DziriBERT-summarization-goud") input_ids = tokenizer(article, return_tensors="pt", truncation=True, padding=True).input_ids generated = model.generate(input_ids)[0] output = tokenizer.decode(generated, skip_special_tokens=True) ``` ## Citation Information ``` @inproceedings{issam2022goudma, title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija}, author={Abderrahmane Issam and Khalil Mrini}, booktitle={3rd Workshop on African Natural Language Processing}, year={2022}, url={https://openreview.net/forum?id=BMVq5MELb9} } ```
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter7
2982e53ff037decbc047f45592660ce7e5a716fb
2022-04-21T05:54:48.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
4m1g0
null
4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter7
3
null
transformers
22,245
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-gl-jupyter7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-gl-jupyter7 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1004 - Wer: 0.0647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8074 | 3.36 | 400 | 0.4882 | 0.5245 | | 0.2396 | 6.72 | 800 | 0.1335 | 0.1524 | | 0.0876 | 10.08 | 1200 | 0.1216 | 0.1199 | | 0.0597 | 13.44 | 1600 | 0.1289 | 0.1241 | | 0.0449 | 16.8 | 2000 | 0.1164 | 0.1028 | | 0.0372 | 20.17 | 2400 | 0.1270 | 0.1023 | | 0.0319 | 23.53 | 2800 | 0.1111 | 0.0966 | | 0.0286 | 26.89 | 3200 | 0.1142 | 0.0925 | | 0.0246 | 30.25 | 3600 | 0.1142 | 0.0926 | | 0.0235 | 33.61 | 4000 | 0.1075 | 0.0836 | | 0.0181 | 36.97 | 4400 | 0.1083 | 0.0837 | | 0.0151 | 40.33 | 4800 | 0.1140 | 0.0768 | | 0.014 | 43.69 | 5200 | 0.1015 | 0.0748 | | 0.0111 | 47.06 | 5600 | 0.1023 | 0.0702 | | 0.0093 | 50.42 | 6000 | 0.1028 | 0.0708 | | 0.0078 | 53.78 | 6400 | 0.0999 | 0.0645 | | 0.0071 | 57.14 | 6800 | 0.1004 | 0.0647 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
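## Example usage (illustrative)

The card above only documents training, so here is a minimal inference sketch, assuming the standard `transformers` automatic-speech-recognition pipeline and a 16 kHz mono recording; the audio file name is a placeholder, not part of the original card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline (CTC decoding is handled internally)
asr = pipeline(
    "automatic-speech-recognition",
    model="4m1g0/wav2vec2-large-xls-r-300m-gl-jupyter7",
)

# "speech_gl_16khz.wav" is a hypothetical path to a 16 kHz Galician recording
print(asr("speech_gl_16khz.wav")["text"])
```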
cammy/led-large-16384-arxiv-1000-lit-evalMA-ga1
92a73aa12db2b625bd2bd8e1dba9c1a5637e0c12
2022-04-21T03:50:40.000Z
[ "pytorch", "tensorboard", "led", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
cammy
null
cammy/led-large-16384-arxiv-1000-lit-evalMA-ga1
3
null
transformers
22,246
Entry not found
4m1g0/wav2vec2-large-xls-r-53m-gl-jupyter6
27b691125d33115c87e1cc6217741e24c0141afc
2022-04-21T07:48:44.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
4m1g0
null
4m1g0/wav2vec2-large-xls-r-53m-gl-jupyter6
3
null
transformers
22,247
Entry not found
MeshalAlamr/wav2vec2-xls-r-300m-ar-4
858791f98a692844331d19441b2a774c9be55337
2022-04-26T04:16:51.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
MeshalAlamr
null
MeshalAlamr/wav2vec2-xls-r-300m-ar-4
3
null
transformers
22,248
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-xls-r-300m-ar-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-ar-4 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7888 - Wer: 0.3697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.8069 | 1.18 | 400 | 1.7793 | 0.9883 | | 1.1949 | 2.35 | 800 | 0.9662 | 0.7908 | | 0.8996 | 3.53 | 1200 | 0.8404 | 0.7154 | | 0.7652 | 4.71 | 1600 | 0.7478 | 0.6379 | | 0.6611 | 5.88 | 2000 | 0.7687 | 0.6229 | | 0.6015 | 7.06 | 2400 | 0.7153 | 0.5948 | | 0.5444 | 8.24 | 2800 | 0.7062 | 0.5826 | | 0.4872 | 9.41 | 3200 | 0.6568 | 0.5414 | | 0.4729 | 10.59 | 3600 | 0.6817 | 0.5599 | | 0.4238 | 11.76 | 4000 | 0.6406 | 0.5262 | | 0.4022 | 12.94 | 4400 | 0.6797 | 0.5184 | | 0.3945 | 14.12 | 4800 | 0.6744 | 0.5147 | | 0.3711 | 15.29 | 5200 | 0.6807 | 0.5090 | | 0.3318 | 16.47 | 5600 | 0.6286 | 0.5011 | | 0.3132 | 17.65 | 6000 | 0.6481 | 0.4814 | | 0.2992 | 18.82 | 6400 | 0.6454 | 0.4958 | | 0.2734 | 20.0 | 6800 | 0.6465 | 0.4825 | | 0.2534 | 21.18 | 7200 | 0.6559 | 0.4658 | | 0.2505 | 22.35 | 7600 | 0.6601 | 0.4618 | | 0.2495 | 23.53 | 8000 | 0.7080 | 0.4813 | | 0.2387 | 24.71 | 8400 | 0.6635 | 0.4508 | | 0.2154 | 25.88 | 8800 | 0.6442 | 0.4538 | | 0.2096 | 27.06 | 9200 | 0.7399 | 0.4579 | | 0.2007 | 28.24 | 9600 | 0.6957 | 0.4512 | | 0.1942 | 29.41 | 10000 | 0.6642 | 0.4267 | | 0.1854 | 30.59 | 10400 | 0.6842 | 0.4393 | | 0.1782 | 31.76 | 10800 | 0.7007 | 0.4393 | | 0.1751 | 32.94 | 11200 | 0.7063 | 0.4321 | | 0.1695 | 34.12 | 11600 | 0.7057 | 0.4330 | | 0.1638 | 35.29 | 12000 | 0.7416 | 0.4266 | | 0.1531 | 36.47 | 12400 | 0.7420 | 0.4273 | | 0.1475 | 37.65 | 12800 | 0.7334 | 0.4218 | | 0.1388 | 38.82 | 13200 | 0.7420 | 0.4227 | | 0.1372 | 40.0 | 13600 | 0.7492 | 0.4238 | | 0.1341 | 41.18 | 14000 | 0.7803 | 0.4193 | | 0.133 | 42.35 | 14400 | 0.7396 | 0.4105 | | 0.1238 | 43.53 | 14800 | 0.7561 | 0.4098 | | 0.1163 | 44.71 | 15200 | 0.7987 | 0.4049 | | 0.116 | 45.88 | 15600 | 0.7769 | 0.4093 | | 0.1079 | 47.06 | 16000 | 0.7780 | 0.3986 | | 0.1043 | 48.24 | 16400 | 0.7674 | 0.3905 | | 0.1004 | 49.41 | 16800 | 0.7931 | 0.3949 | | 0.0987 | 50.59 | 17200 | 0.7605 | 0.3938 | | 0.0963 | 51.76 | 17600 | 0.7735 | 0.3858 | | 0.0905 | 52.94 | 18000 | 0.7504 | 0.3802 | | 0.086 | 54.12 | 18400 | 0.8038 | 0.3867 | | 0.0839 | 55.29 | 18800 | 0.7887 | 0.3797 | | 0.0798 | 56.47 | 19200 | 0.7832 | 0.3705 | | 0.0785 | 57.65 | 19600 | 0.7771 | 0.3706 | | 0.0765 | 58.82 | 20000 | 0.7858 | 0.3703 | | 0.0739 | 60.0 | 20400 | 
0.7888 | 0.3697 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.11.0 - Datasets 1.18.3 - Tokenizers 0.10.3
satyamrajawat1994/distillbert-base-uncase-conll2003
9911798506ecb92b77db2cbd947e88b393298645
2022-04-21T13:45:10.000Z
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
satyamrajawat1994
null
satyamrajawat1994/distillbert-base-uncase-conll2003
3
null
transformers
22,249
Entry not found
satish860/sms_detection_algorithm
23972cebe7d3069972fc867f721eaca5edcbf89f
2022-04-21T16:42:17.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
satish860
null
satish860/sms_detection_algorithm
3
null
transformers
22,250
Entry not found
satish860/finetuning-sentiment-model-3000-samples
7b3031fd94662c811e4ab8b6fb057f72c446b24b
2022-04-21T17:02:49.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
satish860
null
satish860/finetuning-sentiment-model-3000-samples
3
null
transformers
22,251
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0454 - Accuracy: 0.9886 - F1: 0.9571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.1.0 - Tokenizers 0.12.1
buddhist-nlp/sanstib
553a59b9dc1a966428f36e2b97adfc7ca4937251
2022-04-22T08:41:48.000Z
[ "pytorch", "roberta", "feature-extraction", "transformers", "license:lgpl-lr" ]
feature-extraction
false
buddhist-nlp
null
buddhist-nlp/sanstib
3
null
transformers
22,252
--- license: lgpl-lr ---
tingzhou/finetuning_test
ff8701ed49db84b166ad1d1cf7280a2853cfeb8e
2022-04-23T14:27:09.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
tingzhou
null
tingzhou/finetuning_test
3
null
transformers
22,253
Entry not found
cj-mills/bert-base-uncased-issues-128
0f890d1bccd1e84155c2c1b00d39ffdcf56dc39c
2022-04-22T18:29:07.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
cj-mills
null
cj-mills/bert-base-uncased-issues-128
3
null
transformers
22,254
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1071 | 1.0 | 291 | 1.6964 | | 1.6421 | 2.0 | 582 | 1.4279 | | 1.4853 | 3.0 | 873 | 1.3924 | | 1.4014 | 4.0 | 1164 | 1.3701 | | 1.3388 | 5.0 | 1455 | 1.1944 | | 1.283 | 6.0 | 1746 | 1.2795 | | 1.2394 | 7.0 | 2037 | 1.2671 | | 1.2014 | 8.0 | 2328 | 1.2084 | | 1.1668 | 9.0 | 2619 | 1.1783 | | 1.14 | 10.0 | 2910 | 1.2076 | | 1.1277 | 11.0 | 3201 | 1.2081 | | 1.1053 | 12.0 | 3492 | 1.1628 | | 1.0819 | 13.0 | 3783 | 1.2544 | | 1.0763 | 14.0 | 4074 | 1.1695 | | 1.0634 | 15.0 | 4365 | 1.1157 | | 1.0637 | 16.0 | 4656 | 1.2526 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
lucaordronneau/finbert-finetuned-FG-SINGLE_SENTENCE-NEWS
3523f2d0a6a950009b01de7e2c53a564880b46d0
2022-05-03T09:58:12.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "model-index" ]
text-classification
false
lucaordronneau
null
lucaordronneau/finbert-finetuned-FG-SINGLE_SENTENCE-NEWS
3
null
transformers
22,255
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finbert-finetuned-FG-SINGLE_SENTENCE-NEWS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finbert-finetuned-FG-SINGLE_SENTENCE-NEWS This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2997 - Accuracy: 0.6414 - F1: 0.6295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 321 | 0.9371 | 0.5699 | 0.4333 | | 0.9282 | 2.0 | 642 | 0.9135 | 0.5930 | 0.5447 | | 0.9282 | 3.0 | 963 | 0.9900 | 0.6033 | 0.5823 | | 0.6743 | 4.0 | 1284 | 1.0802 | 0.6142 | 0.6065 | | 0.3134 | 5.0 | 1605 | 1.5156 | 0.6183 | 0.5971 | | 0.3134 | 6.0 | 1926 | 1.3695 | 0.6319 | 0.6183 | | 0.1709 | 7.0 | 2247 | 1.8746 | 0.6462 | 0.6267 | | 0.1112 | 8.0 | 2568 | 2.0880 | 0.6176 | 0.6155 | | 0.1112 | 9.0 | 2889 | 2.3953 | 0.6190 | 0.6087 | | 0.0811 | 10.0 | 3210 | 2.3792 | 0.6339 | 0.6225 | | 0.0608 | 11.0 | 3531 | 2.3783 | 0.6360 | 0.6282 | | 0.0608 | 12.0 | 3852 | 2.5982 | 0.6544 | 0.6351 | | 0.039 | 13.0 | 4173 | 2.7687 | 0.6346 | 0.6305 | | 0.039 | 14.0 | 4494 | 2.8980 | 0.6414 | 0.6299 | | 0.0206 | 15.0 | 4815 | 3.0858 | 0.6319 | 0.6253 | | 0.0168 | 16.0 | 5136 | 3.2408 | 0.6244 | 0.6170 | | 0.0168 | 17.0 | 5457 | 3.1809 | 0.6435 | 0.6293 | | 0.0123 | 18.0 | 5778 | 3.2629 | 0.6449 | 0.6324 | | 0.0055 | 19.0 | 6099 | 3.2866 | 0.6449 | 0.6308 | | 0.0055 | 20.0 | 6420 | 3.2997 | 0.6414 | 0.6295 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
huggingtweets/it_its_are_are
1942a1c79c325857fc0cac3514e017c3472645af
2022-05-02T22:36:16.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/it_its_are_are
3
null
transformers
22,256
--- language: en thumbnail: http://www.huggingtweets.com/it_its_are_are/1651530971798/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1480214799539740676/S3W8I0f2_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">angelicism2727272628</div> <div style="text-align: center; font-size: 14px;">@it_its_are_are</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from angelicism2727272628. | Data | angelicism2727272628 | | --- | --- | | Tweets downloaded | 229 | | Retweets | 35 | | Short tweets | 20 | | Tweets kept | 174 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1p6kjacr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @it_its_are_are's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/sou4cazg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/sou4cazg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/it_its_are_are') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ahmeddbahaa/mt5-base-finetuned-ar-wikilingua
b3f81f1525fdb3512bdc7a7e8d39a086fe9bdf99
2022-04-23T14:21:41.000Z
[ "pytorch", "mt5", "text2text-generation", "dataset:wiki_lingua", "transformers", "summarization", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
ahmeddbahaa
null
ahmeddbahaa/mt5-base-finetuned-ar-wikilingua
3
null
transformers
22,257
--- license: apache-2.0 tags: - summarization - generated_from_trainer datasets: - wiki_lingua model-index: - name: mt5-base-finetuned-ar-wikilingua results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-ar-wikilingua This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset. It achieves the following results on the evaluation set: - Loss: 3.6790 - Rouge-1: 19.46 - Rouge-2: 6.82 - Rouge-l: 17.57 - Gen Len: 18.83 - Bertscore: 70.18 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 8 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 4.9783 | 1.0 | 5111 | 4.0107 | 15.8 | 4.65 | 14.18 | 18.98 | 68.66 | | 4.2093 | 2.0 | 10222 | 3.8664 | 16.46 | 5.17 | 15.08 | 18.91 | 68.5 | | 4.0303 | 3.0 | 15333 | 3.7847 | 17.0 | 5.43 | 15.45 | 18.89 | 68.75 | | 3.9165 | 4.0 | 20444 | 3.7405 | 17.03 | 5.5 | 15.45 | 18.86 | 68.78 | | 3.8396 | 5.0 | 25555 | 3.7102 | 17.14 | 5.57 | 15.48 | 18.87 | 68.92 | | 3.7825 | 6.0 | 30666 | 3.6944 | 17.64 | 5.73 | 15.96 | 18.82 | 69.14 | | 3.7447 | 7.0 | 35777 | 3.6801 | 17.6 | 5.66 | 15.9 | 18.78 | 69.23 | | 3.7203 | 8.0 | 40888 | 3.6790 | 17.94 | 5.81 | 16.21 | 18.81 | 69.29 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
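## Example usage (illustrative)

A minimal inference sketch, assuming the standard `transformers` summarization pipeline works with this mT5 checkpoint as-is; the placeholder article and the generation settings are illustrative and not taken from the card.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/mt5-base-finetuned-ar-wikilingua",
)

article = "..."  # replace with an Arabic article, as in the wiki_lingua training data
summary = summarizer(article, max_length=64, min_length=10)[0]["summary_text"]
print(summary)
```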
omar47/wav2vec2-large-xls-r-300m-urdu-common_voice_8_0
3b45ba5fe7f041c7131bc40db6a918fb541d41fa
2022-04-23T23:23:58.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
omar47
null
omar47/wav2vec2-large-xls-r-300m-urdu-common_voice_8_0
3
null
transformers
22,258
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-urdu-common_voice_8_0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-urdu-common_voice_8_0 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.3860 - Wer: 0.7546 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 14 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5253 | 1.27 | 32 | 1.3860 | 0.7546 | | 0.524 | 2.55 | 64 | 1.3860 | 0.7546 | | 0.5197 | 3.82 | 96 | 1.3860 | 0.7546 | | 0.523 | 5.12 | 128 | 1.3860 | 0.7546 | | 0.5224 | 6.39 | 160 | 1.3860 | 0.7546 | | 0.5332 | 7.67 | 192 | 1.3860 | 0.7546 | | 0.5227 | 8.94 | 224 | 1.3860 | 0.7546 | | 0.5272 | 10.24 | 256 | 1.3860 | 0.7546 | | 0.5294 | 11.51 | 288 | 1.3860 | 0.7546 | | 0.5146 | 12.78 | 320 | 1.3860 | 0.7546 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.12.1
HJHGJGHHG/GAU-Base-Full
f5c867ca172749dc38daea80ff3947261779c02a
2022-04-24T09:07:58.000Z
[ "pytorch", "gau", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
HJHGJGHHG
null
HJHGJGHHG/GAU-Base-Full
3
null
transformers
22,259
Entry not found
tingzhou/cn_finetuning
eecd2f2d2c5e2ce02acca3d79aaed7d929f77304
2022-04-24T14:11:45.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
tingzhou
null
tingzhou/cn_finetuning
3
null
transformers
22,260
Entry not found
domenicrosati/t5-small-finetuned-contradiction-finetuned-contradiction
da262f293f592208fe6c6b6cef8b0b65708f97fb
2022-04-24T14:55:23.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
domenicrosati
null
domenicrosati/t5-small-finetuned-contradiction-finetuned-contradiction
3
null
transformers
22,261
Entry not found
fxxcyz/distilbert-base-uncased-finetuned-cola
e117104b4821b31155570bab99f4fe613ea7b9da
2022-04-24T18:11:52.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
fxxcyz
null
fxxcyz/distilbert-base-uncased-finetuned-cola
3
null
transformers
22,262
Entry not found
Felix92/doctr-torch-db-mobilenet-v3-large
c6e0a46f8dc464ff96561d74ee9bbaf895d2e15c
2022-04-24T20:25:41.000Z
[ "pytorch", "en", "transformers" ]
null
false
Felix92
null
Felix92/doctr-torch-db-mobilenet-v3-large
3
null
transformers
22,263
--- language: en --- <p align="center"> <img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%"> </p> **Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch** ## Task: detection https://github.com/mindee/doctr ### Example usage: ```python >>> from doctr.io import DocumentFile >>> from doctr.models import ocr_predictor, from_hub >>> img = DocumentFile.from_images(['<image_path>']) >>> # Load your model from the hub >>> model = from_hub('mindee/my-model') >>> # Pass it to the predictor >>> # If your model is a recognition model: >>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large', >>> reco_arch=model, >>> pretrained=True) >>> # If your model is a detection model: >>> predictor = ocr_predictor(det_arch=model, >>> reco_arch='crnn_mobilenet_v3_small', >>> pretrained=True) >>> # Get your predictions >>> res = predictor(img) ```
chrishuber/roberta-kaggledev-testing
516cdbef69a80c73a8b046d1a30a94783b504379
2022-04-25T00:05:27.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
chrishuber
null
chrishuber/roberta-kaggledev-testing
3
null
transformers
22,264
Entry not found
wildsheepchaser/distilbert-base-uncased-finetuned-cola
07bf98c1648f1ef339e33aa3b7d79e9ceb848dae
2022-04-25T01:25:27.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
wildsheepchaser
null
wildsheepchaser/distilbert-base-uncased-finetuned-cola
3
null
transformers
22,265
Entry not found
PSW/random_sim_del
661fcb4c845dd7f1e20ab89141abf1cd3e6da312
2022-04-25T03:15:57.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/random_sim_del
3
null
transformers
22,266
Entry not found
sai82/distilbert-base-uncased-finetuned-emotion
ff2079567088ecdb6063f464fb3cdff1ddce8f29
2022-04-25T03:11:27.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
sai82
null
sai82/distilbert-base-uncased-finetuned-emotion
3
null
transformers
22,267
Entry not found
Real29/my-model-nela
97a12e2f8ac1c7d4c47343115ba190e15eac7b6f
2022-04-25T18:47:03.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Real29
null
Real29/my-model-nela
3
null
transformers
22,268
Entry not found
PSW/max_sim_ins_seed1
b9f1ef8b7e1b71e0c1e7ad13b4d84c82237972a2
2022-04-25T10:38:59.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/max_sim_ins_seed1
3
null
transformers
22,269
Entry not found
maximedb/glue_sst_classifier
db6a380a282ffef649ded3193ecd008690a3613c
2022-04-25T19:42:10.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
maximedb
null
maximedb/glue_sst_classifier
3
null
transformers
22,270
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - f1 - accuracy model-index: - name: glue_sst_classifier results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: F1 type: f1 value: 0.9033707865168539 - name: Accuracy type: accuracy value: 0.9013761467889908 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # glue_sst_classifier This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2359 - F1: 0.9034 - Accuracy: 0.9014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 | | 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 | | 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 | | 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 | | 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
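## Example usage (illustrative)

Since the card only documents training, here is a minimal sketch of running the classifier directly with `transformers`; the example sentence is made up, and the index-to-label mapping is read from the saved config rather than assumed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "maximedb/glue_sst_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("This movie was a pleasant surprise.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Map probabilities back to label names via the config (SST-2 is a binary sentiment task)
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs[0])})
```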
ddobokki/unsup-simcse-klue-roberta-base
c42c6889c5a8dd71f93ae0149b78f6090db129d6
2022-04-26T05:22:12.000Z
[ "pytorch", "roberta", "ko", "transformers", "simcse" ]
null
false
ddobokki
null
ddobokki/unsup-simcse-klue-roberta-base
3
null
transformers
22,271
--- language: - ko tags: - simcse --- # KorSTS-dev ``` "eval_cosine_pearson": 0.8461074829101562 "eval_cosine_spearman": 0.8447369732456155 "eval_euclidean_pearson": 0.8401166200637817 "eval_euclidean_spearman": 0.8441547920405729 "eval_manhattan_pearson": 0.8404706120491028 "eval_manhattan_spearman": 0.8449217524976507 "eval_dot_pearson": 0.8457739353179932 "eval_dot_spearman": 0.8440466726739222 ``` # KorSTS-test ``` "eval_cosine_pearson": 0.7702209949493408 "eval_cosine_spearman": 0.7671020822573297 "eval_euclidean_pearson": 0.7617944478988647 "eval_euclidean_spearman": 0.7651634975965186 "eval_manhattan_pearson": 0.7639209032058716 "eval_manhattan_spearman": 0.7674607376361398 "eval_dot_pearson": 0.7696021795272827 "eval_dot_spearman": 0.7667385347139427 ```
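## Example usage (illustrative)

The card reports KorSTS scores but no inference code, so here is a minimal sketch for extracting sentence embeddings with plain `transformers`, assuming the common SimCSE convention of using the `[CLS]` token representation; the pooling choice and the example sentences are assumptions, not taken from the card.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "ddobokki/unsup-simcse-klue-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = ["날씨가 정말 좋다.", "오늘은 맑고 화창하다."]  # illustrative Korean sentences
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the [CLS] hidden state as the sentence embedding (a common SimCSE choice)
embeddings = outputs.last_hidden_state[:, 0]
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```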
Real29/my-model-proppy
cabad8f23e2198ec0f5aeecc4f15262121b2b786
2022-04-26T10:32:43.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Real29
null
Real29/my-model-proppy
3
null
transformers
22,272
Entry not found
Real29/my-model-jacobs
4ab0e4d76c96889a789ee502b962d972c6bd360b
2022-04-26T14:26:01.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Real29
null
Real29/my-model-jacobs
3
null
transformers
22,273
Entry not found
plowcow/distilbert-base-uncased-finetuned-emotion
98bf27d8c650147f4b3efa13fb83fd2979b31772
2022-06-21T04:09:25.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
plowcow
null
plowcow/distilbert-base-uncased-finetuned-emotion
3
null
transformers
22,274
Entry not found
Caroline-Vandyck/reviews-generator
e1a0b9cda0070e65134b04b3824c026ba26a639d
2022-04-26T12:58:01.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "dataset:amazon_reviews_multi", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
Caroline-Vandyck
null
Caroline-Vandyck/reviews-generator
3
null
transformers
22,275
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: reviews-generator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reviews-generator This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 3.4990 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7955 | 0.08 | 500 | 3.5577 | | 3.7495 | 0.16 | 1000 | 3.4990 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
rahulgkatre/DialoGPT-marge
8a3de1c000af958866d425f56c767d3a2d355dd8
2022-04-27T03:21:00.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
rahulgkatre
null
rahulgkatre/DialoGPT-marge
3
null
transformers
22,276
Entry not found
PSW/random_sim_ins2_seed27
a3d44620ad6f81ddb723894536d11fd6b44496e0
2022-04-27T03:24:41.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/random_sim_ins2_seed27
3
null
transformers
22,277
Entry not found
manueltonneau/bert-twitter-pt-is-unemployed
f361fcb96ad8acf771220be33adc1edfea99c11c
2022-04-27T09:07:42.000Z
[ "pytorch", "bert", "text-classification", "pt", "arxiv:2203.09178", "transformers" ]
text-classification
false
manueltonneau
null
manueltonneau/bert-twitter-pt-is-unemployed
3
null
transformers
22,278
--- language: pt # <-- my language widget: - text: "Tô desempregada!" --- # Detection of employment status disclosures on Twitter ## Model main characteristics: - class: Is Unemployed (1), else (0) - country: BR - language: Portuguese - architecture: BERT base ## Model description This model is a version of `neuralmind/bert-base-portuguese-cased` finetuned to recognize Portuguese tweets where a user mentions that she is currently unemployed. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with: - the positive class referring to tweets mentioning that a user is currently unemployed (label=1) - the negative class referring to all other tweets (label=0) ## Resources The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment). Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178). ## Citation If you find this model useful, please cite our paper (citation to come soon).
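## Example usage (illustrative)

A minimal sketch using the `transformers` text-classification pipeline; the input is the widget example from this card, and the exact label names returned depend on the saved config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="manueltonneau/bert-twitter-pt-is-unemployed",
)

# Positive class (label 1) = the user says they are currently unemployed
print(classifier("Tô desempregada!"))
```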
fxmarty/resnet-tiny-mnist
d64012c39d4ea6fad998cca3bad7fbf7987709ef
2022-04-27T09:27:58.000Z
[ "pytorch", "resnet", "image-classification", "transformers", "license:gpl-3.0" ]
image-classification
false
fxmarty
null
fxmarty/resnet-tiny-mnist
3
null
transformers
22,279
--- license: gpl-3.0 --- A small Resnet model for MNIST. Achieves 0.985 accuracy on the validation set.
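## Example usage (illustrative)

A minimal sketch with the `transformers` image-classification pipeline, assuming an image processor is bundled with the checkpoint; the image path is a placeholder for a 28x28 MNIST-style digit.

```python
from PIL import Image
from transformers import pipeline

# Assumes the checkpoint ships its own image processor; "digit.png" is a placeholder path
classifier = pipeline("image-classification", model="fxmarty/resnet-tiny-mnist")

image = Image.open("digit.png")  # a 28x28 MNIST-style digit image
print(classifier(image, top_k=3))
```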
PSW/random_sim_swap_seed1
8cff3a861574f61d9c6934a3a18d8066e078dfb4
2022-04-27T09:49:18.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/random_sim_swap_seed1
3
null
transformers
22,280
Entry not found
Prinernian/xlm-roberta-base-finetuned-panx-de
5e4b75b157e4cd14f13ed09ffe8488d1466a2c2e
2022-05-18T19:30:06.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Prinernian
null
Prinernian/xlm-roberta-base-finetuned-panx-de
3
null
transformers
22,281
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8588964027959312 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1383 - F1: 0.8589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2631 | 1.0 | 525 | 0.1596 | 0.8218 | | 0.1296 | 2.0 | 1050 | 0.1353 | 0.8479 | | 0.0821 | 3.0 | 1575 | 0.1383 | 0.8589 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
anton-l/xtreme_s_xlsr_300m_fleurs_asr_western_european
d8f109ba4c0a717d78317d52b87b2c05afc14270
2022-04-28T09:56:22.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "all", "dataset:google/xtreme_s", "transformers", "fleurs-asr", "google/xtreme_s", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
anton-l
null
anton-l/xtreme_s_xlsr_300m_fleurs_asr_western_european
3
null
transformers
22,282
--- language: - all license: apache-2.0 tags: - fleurs-asr - google/xtreme_s - generated_from_trainer datasets: - google/xtreme_s model-index: - name: xtreme_s_xlsr_300m_fleurs_asr_western_european results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtreme_s_xlsr_300m_fleurs_asr_western_european This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - FLEURS.ALL dataset. It achieves the following results on the evaluation set: - Cer: 0.2484 - Cer Ast Es: 0.1598 - Cer Bs Ba: 0.1749 - Cer Ca Es: 0.1655 - Cer Cy Gb: 0.2280 - Cer Da Dk: 0.3616 - Cer De De: 0.1287 - Cer El Gr: 0.6020 - Cer En Us: 0.1938 - Cer Es 419: 0.1288 - Cer Fi Fi: 0.2050 - Cer Fr Fr: 0.1811 - Cer Ga Ie: 0.4474 - Cer Gl Es: 0.1324 - Cer Hr Hr: 0.1555 - Cer Hu Hu: 0.3911 - Cer Is Is: 0.4646 - Cer It It: 0.1283 - Cer Kea Cv: 0.1818 - Cer Lb Lu: 0.2594 - Cer Mt Mt: 0.3628 - Cer Nb No: 0.2254 - Cer Nl Nl: 0.1790 - Cer Oci Fr: 0.2159 - Cer Pt Br: 0.2275 - Cer Sv Se: 0.3092 - Loss: 1.3089 - Loss Ast Es: 0.7715 - Loss Bs Ba: 0.7378 - Loss Ca Es: 0.7868 - Loss Cy Gb: 1.1441 - Loss Da Dk: 1.9130 - Loss De De: 0.5391 - Loss El Gr: 3.4904 - Loss En Us: 0.9632 - Loss Es 419: 0.6186 - Loss Fi Fi: 0.8953 - Loss Fr Fr: 0.9076 - Loss Ga Ie: 3.0217 - Loss Gl Es: 0.5788 - Loss Hr Hr: 0.6462 - Loss Hu Hu: 1.9029 - Loss Is Is: 2.6551 - Loss It It: 0.6052 - Loss Kea Cv: 0.9107 - Loss Lb Lu: 1.3705 - Loss Mt Mt: 2.3651 - Loss Nb No: 1.1518 - Loss Nl Nl: 0.8490 - Loss Oci Fr: 1.1421 - Loss Pt Br: 1.1641 - Loss Sv Se: 1.5910 - Wer: 0.6451 - Wer Ast Es: 0.4654 - Wer Bs Ba: 0.5443 - Wer Ca Es: 0.4979 - Wer Cy Gb: 0.5962 - Wer Da Dk: 0.8455 - Wer De De: 0.4221 - Wer El Gr: 0.9805 - Wer En Us: 0.4556 - Wer Es 419: 0.3928 - Wer Fi Fi: 0.8116 - Wer Fr Fr: 0.4690 - Wer Ga Ie: 0.8519 - Wer Gl Es: 0.4245 - Wer Hr Hr: 0.4895 - Wer Hu Hu: 0.9099 - Wer Is Is: 0.9960 - Wer It It: 0.4415 - Wer Kea Cv: 0.5202 - Wer Lb Lu: 0.7225 - Wer Mt Mt: 1.0096 - Wer Nb No: 0.6541 - Wer Nl Nl: 0.5257 - Wer Oci Fr: 0.5770 - Wer Pt Br: 0.6685 - Wer Sv Se: 0.8546 - Predict Samples: 20043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 3.1411 | 0.49 | 500 | 3.1673 | 1.0 | 1.0 | | 0.6397 | 0.97 | 1000 | 0.9039 | 0.7171 | 0.2862 | | 0.4033 | 1.46 | 1500 | 0.8914 | 0.6862 | 0.2763 | | 0.3473 | 1.94 | 2000 | 0.8017 | 0.6505 | 0.2536 | | 0.3143 | 2.43 | 2500 | 0.8568 | 0.6566 | 0.2627 | | 0.3004 | 2.91 | 3000 | 0.8898 | 0.6640 | 0.2686 | | 0.282 | 3.4 | 3500 | 0.8489 | 0.6637 | 0.2571 | | 0.2489 | 3.88 | 4000 | 0.8955 | 0.6744 | 0.2691 | | 0.1706 | 4.37 | 4500 | 0.9190 | 0.6788 | 0.2688 | | 0.3336 | 4.85 | 5000 | 0.8915 | 
0.6594 | 0.2572 | | 0.1426 | 5.34 | 5500 | 0.9501 | 0.6784 | 0.2686 | | 0.2301 | 5.83 | 6000 | 1.0217 | 0.6719 | 0.2735 | | 0.1325 | 6.31 | 6500 | 0.9578 | 0.6691 | 0.2655 | | 0.1145 | 6.8 | 7000 | 0.9129 | 0.6680 | 0.2593 | | 0.1202 | 7.28 | 7500 | 0.9646 | 0.6749 | 0.2619 | | 0.143 | 7.77 | 8000 | 0.9200 | 0.6554 | 0.2554 | | 0.1012 | 8.25 | 8500 | 0.9553 | 0.6787 | 0.2628 | | 0.1018 | 8.74 | 9000 | 0.9455 | 0.6445 | 0.2511 | | 0.1148 | 9.22 | 9500 | 1.0206 | 0.6725 | 0.2629 | | 0.0794 | 9.71 | 10000 | 0.9305 | 0.6547 | 0.2526 | | 0.2891 | 10.19 | 10500 | 1.0424 | 0.6709 | 0.2570 | | 0.1665 | 10.68 | 11000 | 0.9760 | 0.6596 | 0.2507 | | 0.1956 | 11.17 | 11500 | 0.9549 | 0.6340 | 0.2440 | | 0.0828 | 11.65 | 12000 | 0.9598 | 0.6403 | 0.2460 | | 0.059 | 12.14 | 12500 | 0.9972 | 0.6574 | 0.2531 | | 0.0505 | 12.62 | 13000 | 0.9836 | 0.6534 | 0.2525 | | 0.0336 | 13.11 | 13500 | 1.0619 | 0.6564 | 0.2519 | | 0.0435 | 13.59 | 14000 | 1.0844 | 0.6480 | 0.2543 | | 0.0216 | 14.08 | 14500 | 1.1084 | 0.6512 | 0.2521 | | 0.0265 | 14.56 | 15000 | 1.1152 | 0.6607 | 0.2563 | | 0.0975 | 15.05 | 15500 | 1.1060 | 0.6456 | 0.2471 | | 0.1396 | 15.53 | 16000 | 1.1100 | 0.6337 | 0.2418 | | 0.0701 | 16.02 | 16500 | 1.1731 | 0.6309 | 0.2415 | | 0.1171 | 16.5 | 17000 | 1.1302 | 0.6315 | 0.2396 | | 0.0778 | 16.99 | 17500 | 1.1485 | 0.6379 | 0.2447 | | 0.0642 | 17.48 | 18000 | 1.2009 | 0.6400 | 0.2464 | | 0.0322 | 17.96 | 18500 | 1.2028 | 0.6357 | 0.2425 | | 0.031 | 18.45 | 19000 | 1.2381 | 0.6285 | 0.2416 | | 0.0579 | 18.93 | 19500 | 1.2299 | 0.6265 | 0.2409 | | 0.0628 | 19.42 | 20000 | 1.2582 | 0.6277 | 0.2395 | | 0.074 | 19.9 | 20500 | 1.2572 | 0.6278 | 0.2394 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.1+cu111 - Datasets 1.18.4.dev0 - Tokenizers 0.11.6
ajtamayoh/roberta-large-finetuned-ADEs_model_2
6c3b6c11c22a90852199ad95432ef2d1428c21a8
2022-04-27T21:33:50.000Z
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
ajtamayoh
null
ajtamayoh/roberta-large-finetuned-ADEs_model_2
3
null
transformers
22,283
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-large-finetuned-ADEs_model_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-finetuned-ADEs_model_2 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2580 - Precision: 0.5407 - Recall: 0.6311 - F1: 0.5824 - Accuracy: 0.8897 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7461 | 1.0 | 640 | 0.3393 | 0.4247 | 0.5095 | 0.4633 | 0.8648 | | 0.3632 | 2.0 | 1280 | 0.2822 | 0.4934 | 0.6035 | 0.5429 | 0.8819 | | 0.3102 | 3.0 | 1920 | 0.2663 | 0.5218 | 0.6112 | 0.5630 | 0.8879 | | 0.2806 | 4.0 | 2560 | 0.2604 | 0.5337 | 0.6311 | 0.5783 | 0.8890 | | 0.2772 | 5.0 | 3200 | 0.2580 | 0.5407 | 0.6311 | 0.5824 | 0.8897 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
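## Example usage (illustrative)

A minimal sketch with the `transformers` token-classification pipeline; the example sentence is invented, and the entity label set comes from the fine-tuning data, which is not documented in this card.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ajtamayoh/roberta-large-finetuned-ADEs_model_2",
    aggregation_strategy="simple",  # merge word-piece predictions into whole spans
)

print(ner("The patient developed severe headaches after starting the new medication."))
```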
huggingtweets/afraidofwasps-dril-senn_spud
a88786922f7b4a04e16359e053b008a67993afcc
2022-06-07T21:10:15.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/afraidofwasps-dril-senn_spud
3
null
transformers
22,284
--- language: en thumbnail: http://www.huggingtweets.com/afraidofwasps-dril-senn_spud/1654636210975/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1387151448203358209/HKNuKY7L_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1182478458552832000/xqEwluRJ_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Will Sennett & Boots, 'with the fur'</div> <div style="text-align: center; font-size: 14px;">@afraidofwasps-dril-senn_spud</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Will Sennett & Boots, 'with the fur'. | Data | wint | Will Sennett | Boots, 'with the fur' | | --- | --- | --- | --- | | Tweets downloaded | 3230 | 3228 | 3217 | | Retweets | 487 | 312 | 504 | | Short tweets | 297 | 622 | 434 | | Tweets kept | 2446 | 2294 | 2279 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/156iladp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afraidofwasps-dril-senn_spud's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6g2dktc9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6g2dktc9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/afraidofwasps-dril-senn_spud') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
princeton-nlp/efficient_mlm_m0.50
0f356f1fd8642661fc4d9c70544c9e5356059447
2022-04-28T18:58:09.000Z
[ "pytorch", "roberta", "fill-mask", "arxiv:2202.08005", "transformers", "autotrain_compatible" ]
fill-mask
false
princeton-nlp
null
princeton-nlp/efficient_mlm_m0.50
3
null
transformers
22,285
--- inference: false --- This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre-layer norm, which is not supported by HuggingFace Transformers. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example, ```python from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification ```
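A minimal loading sketch for this checkpoint, assuming the [DinkyTrain](https://github.com/princeton-nlp/DinkyTrain.git) repository has been cloned and placed on the Python path; the class name follows the import above, and standard `from_pretrained` support is assumed rather than verified:

```python
# Sketch only: assumes the DinkyTrain repo is cloned locally and on PYTHONPATH,
# so the pre-layer-norm RoBERTa class can be imported as shown in the card above.
from transformers import AutoTokenizer
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/efficient_mlm_m0.50")
model = RobertaForMaskedLM.from_pretrained("princeton-nlp/efficient_mlm_m0.50")  # assumes standard from_pretrained support
model.eval()
```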
aditeyabaral/sonobois
9e8801d585d4a2fe32ea91a08c61d2215c56177c
2022-04-29T07:32:56.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
aditeyabaral
null
aditeyabaral/sonobois
3
null
transformers
22,286
--- tags: - conversational --- # Model trained on sonobois convos
zhiguoxu/distilbert-base-uncased-finetuned-emotion
6775925218c2451103f53acb76f3855771dce208
2022-04-29T11:59:42.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
zhiguoxu
null
zhiguoxu/distilbert-base-uncased-finetuned-emotion
3
null
transformers
22,287
Entry not found
doc2query/msmarco-arabic-mt5-base-v1
aec7e95a7efcdf32b81048b70f81c814e2d6a899
2022-04-29T11:42:59.000Z
[ "pytorch", "mt5", "text2text-generation", "ar", "dataset:unicamp-dl/mmarco", "arxiv:1904.08375", "arxiv:2104.08663", "arxiv:2112.07577", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
doc2query
null
doc2query/msmarco-arabic-mt5-base-v1
3
null
transformers
22,288
--- language: ar datasets: - unicamp-dl/mmarco widget: - text: "بايثون (بالإنجليزية: Python)‏ هي لغة برمجة، عالية المستوى سهلة التعلم مفتوحة المصدر قابلة للتوسيع، تعتمد أسلوب البرمجة الكائنية (OOP). لغة بايثون هي لغة مُفسَّرة، ومُتعدِدة الاستخدامات، وتستخدم بشكل واسع في العديد من المجالات، كبناء البرامج المستقلة باستخدام الواجهات الرسومية وفي تطبيقات الويب، ويمكن استخدامها كلغة برمجة نصية للتحكم في أداء العديد من البرمجيات مثل بلندر. بشكل عام، يمكن استخدام بايثون لعمل البرامج البسيطة للمبتدئين، ولإنجاز المشاريع الضخمة في الوقت نفسه. غالباً ما يُنصح المبتدؤون في ميدان البرمجة بتعلم هذه اللغة لأنها من بين أسرع اللغات البرمجية تعلماً." license: apache-2.0 --- # doc2query/msmarco-arabic-mt5-base-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generated queries contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini. - **Domain-Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch model_name = 'doc2query/msmarco-arabic-mt5-base-v1' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) text = "بايثون (بالإنجليزية: Python)‏ هي لغة برمجة، عالية المستوى سهلة التعلم مفتوحة المصدر قابلة للتوسيع، تعتمد أسلوب البرمجة الكائنية (OOP). لغة بايثون هي لغة مُفسَّرة، ومُتعدِدة الاستخدامات، وتستخدم بشكل واسع في العديد من المجالات، كبناء البرامج المستقلة باستخدام الواجهات الرسومية وفي تطبيقات الويب، ويمكن استخدامها كلغة برمجة نصية للتحكم في أداء العديد من البرمجيات مثل بلندر. بشكل عام، يمكن استخدام بايثون لعمل البرامج البسيطة للمبتدئين، ولإنجاز المشاريع الضخمة في الوقت نفسه. غالباً ما يُنصح المبتدؤون في ميدان البرمجة بتعلم هذه اللغة لأنها من بين أسرع اللغات البرمجية تعلماً." def create_queries(para): input_ids = tokenizer.encode(para, return_tensors='pt') with torch.no_grad(): # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality sampling_outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, top_k=10, num_return_sequences=5 ) # Here we use Beam-search. It generates better quality queries, but with less diversity beam_outputs = model.generate( input_ids=input_ids, max_length=64, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True ) print("Paragraph:") print(para) print("\nBeam Outputs:") for i in range(len(beam_outputs)): query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') print("\nSampling Outputs:") for i in range(len(sampling_outputs)): query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') create_queries(text) ``` **Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it. ## Training This model was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository. The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
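The document-expansion workflow described above boils down to appending the generated queries to each passage before BM25 indexing. A hedged sketch of that step (the helper name is ours, and the index client, e.g. Elasticsearch or Lucene, is omitted):

```python
# Illustrative sketch of doc2query-style document expansion; not taken from the card above.
def expand_passage(passage: str, generated_queries: list) -> str:
    # Appending the queries closes the lexical gap: BM25 can now match their vocabulary too.
    return passage + " " + " ".join(generated_queries)

expanded_doc = expand_passage(
    "Python is a high-level programming language.",
    ["what is python", "python programming language definition"],
)
# `expanded_doc` would then be indexed in a standard BM25 engine in place of the raw passage.
```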
scasutt/wav2vec2-large-xlsr-53_full_final_train
1a85afba2d9d33dbfa5fb9144e7f988dc9b00484
2022-05-07T11:52:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
scasutt
null
scasutt/wav2vec2-large-xlsr-53_full_final_train
3
null
transformers
22,289
Entry not found
csikasote/xlsr-53-bemba-5hrs
070c9c786f21ded5a810a1f47f75241d4954be41
2022-04-29T23:40:17.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
csikasote
null
csikasote/xlsr-53-bemba-5hrs
3
null
transformers
22,290
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: xlsr-53-bemba-5hrs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlsr-53-bemba-5hrs This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3414 - Wer: 0.4867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2701 | 2.16 | 400 | 0.4047 | 0.6230 | | 0.488 | 4.32 | 800 | 0.3002 | 0.4917 | | 0.2807 | 6.49 | 1200 | 0.3342 | 0.4802 | | 0.1696 | 8.65 | 1600 | 0.3414 | 0.4867 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
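The card above reports WER but no inference snippet; a minimal usage sketch along these lines should work, assuming a 16 kHz mono recording (the file name is a placeholder):

```python
# Hedged sketch, not part of the card above: transcribe a local audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="csikasote/xlsr-53-bemba-5hrs")
result = asr("bemba_sample.wav")  # placeholder path; expects 16 kHz mono audio
print(result["text"])
```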
tonydiana1/distilroberta-base-finetuned-wikitext2
41e637ada53acd14783b113addbaeb22c40b6319
2022-04-30T01:23:18.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
tonydiana1
null
tonydiana1/distilroberta-base-finetuned-wikitext2
3
null
transformers
22,291
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0853 | 1.0 | 2406 | 1.9214 | | 1.986 | 2.0 | 4812 | 1.8799 | | 1.9568 | 3.0 | 7218 | 1.8202 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
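A short fill-mask sketch for the checkpoint above (the sentence is our own example; RoBERTa-style tokenizers use the `<mask>` token):

```python
# Hedged usage sketch, not taken from the card above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tonydiana1/distilroberta-base-finetuned-wikitext2")
for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```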
jacklindsai/is_it_elon_musk
292bc19761b4acd3dd28c35188c7083db1bf07e7
2022-04-30T05:33:23.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
jacklindsai
null
jacklindsai/is_it_elon_musk
3
null
transformers
22,292
Entry not found
dyyyyyyyy/xTune_squad_XLM-RoBERTa-large
7bffd381d0027673788b6a7bd23677d1fc82a125
2022-04-30T09:01:23.000Z
[ "pytorch", "xlm-roberta", "transformers" ]
null
false
dyyyyyyyy
null
dyyyyyyyy/xTune_squad_XLM-RoBERTa-large
3
null
transformers
22,293
Entry not found
shumail/wav2vec2-base-timit-demo-colab
2c895c92f1c2b442b445bc08e60df9c03452dd57
2022-05-01T07:13:08.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
shumail
null
shumail/wav2vec2-base-timit-demo-colab
3
null
transformers
22,294
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8686 - Wer: 0.6263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0505 | 13.89 | 500 | 3.0760 | 1.0 | | 1.2748 | 27.78 | 1000 | 0.8686 | 0.6263 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ali221000262/wav2vec2-base-timit-demo-colab
690b91842210741af5dcf82684fa67aded64e266
2022-04-30T18:01:43.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
ali221000262
null
ali221000262/wav2vec2-base-timit-demo-colab
3
null
transformers
22,295
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [ali221000262/wav2vec2-base-timit-demo-colab](https://huggingface.co/ali221000262/wav2vec2-base-timit-demo-colab) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2161 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 2.6432 | 13.89 | 500 | 3.2161 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
doddle124578/wav2vec2-base-timit-demo-colab-1
ba1b20d7f45c423f2377d3ccf281d9058a2225d7
2022-05-01T12:53:33.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
doddle124578
null
doddle124578/wav2vec2-base-timit-demo-colab-1
3
null
transformers
22,296
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab-1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6513 - Wer: 0.5544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.6074 | 8.77 | 500 | 3.1529 | 1.0 | | 1.3204 | 17.54 | 1000 | 0.6513 | 0.5544 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
doddle124578/wav2vec2-base-timit-demo-colab-2
134e01305d35da523f0ec0e1348d5975af70785b
2022-04-30T18:57:05.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
doddle124578
null
doddle124578/wav2vec2-base-timit-demo-colab-2
3
null
transformers
22,297
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab-2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7429 - Wer: 0.5080 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 900 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.984 | 8.77 | 500 | 0.9028 | 0.7036 | | 0.6412 | 17.54 | 1000 | 0.7275 | 0.5868 | | 0.3073 | 26.32 | 1500 | 0.7429 | 0.5080 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
Worldman/pegasus-samsum
d179db9465e6270c6beb4f9c145f66dfd3fafc90
2022-04-30T23:42:21.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "dataset:samsum", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
Worldman
null
Worldman/pegasus-samsum
3
null
transformers
22,298
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7073 | 0.54 | 500 | 1.4841 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
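Since the card above lists only training details, a dialogue-summarization sketch such as the following could be used (the dialogue is a made-up example in the SAMSum style):

```python
# Hedged usage sketch, not part of the card above.
from transformers import pipeline

summarizer = pipeline("summarization", model="Worldman/pegasus-samsum")
dialogue = "Anna: Are we still meeting at 6?\nTom: Yes, see you at the cafe near the station."
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```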
ouyh18/distilbert-base-uncased-finetuned-cola
71a064976beb51da2b8d0bc13e204b861cb37753
2022-05-01T03:43:35.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
ouyh18
null
ouyh18/distilbert-base-uncased-finetuned-cola
3
null
transformers
22,299
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5500173690801187 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8456 - Matthews Correlation: 0.5500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5197 | 1.0 | 535 | 0.5477 | 0.4130 | | 0.3456 | 2.0 | 1070 | 0.5035 | 0.5239 | | 0.2342 | 3.0 | 1605 | 0.6100 | 0.5285 | | 0.1698 | 4.0 | 2140 | 0.7556 | 0.5456 | | 0.1295 | 5.0 | 2675 | 0.8456 | 0.5500 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.1+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
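A quick acceptability-classification sketch for the checkpoint above; since the card defines no label map, the outputs are presumably the generic `LABEL_0`/`LABEL_1`, so treat their interpretation as an assumption:

```python
# Hedged usage sketch, not from the card above; label semantics are assumed, not verified.
from transformers import pipeline

cola_clf = pipeline("text-classification", model="ouyh18/distilbert-base-uncased-finetuned-cola")
print(cola_clf("The book was written by the author."))   # well-formed sentence
print(cola_clf("The book was wrote by author the."))     # ill-formed sentence
```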