Dataset columns (name, dtype, observed length range or number of classes):

| Column | Dtype | Observed values |
|---|---|---|
| modelId | string | length 4 to 112 |
| sha | string | length 40 |
| lastModified | string | length 24 |
| tags | sequence | n/a |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | length 2 to 38 |
| config | null | n/a |
| id | string | length 4 to 112 |
| downloads | float64 | 0 to 36.8M |
| likes | float64 | 0 to 712 |
| library_name | string | 17 classes |
| `__index_level_0__` | int64 | 0 to 38.5k |
| readme | string | length 0 to 186k |
BigSalmon/InformalToFormalLincoln32
e48683a1a97e013219fb90ed35f96054e1702e70
2022-03-28T00:48:58.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/InformalToFormalLincoln32
1
null
transformers
31,000
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln32") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln32") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ```
CallForEcho/DialoGPT-small-harrypotter
6c54aa94daacfacb22876e4ab6966c489de2f1e7
2022-03-28T00:42:12.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
CallForEcho
null
CallForEcho/DialoGPT-small-harrypotter
1
null
transformers
31,001
--- tags: - conversational --- # Harry Potter DialoGPT Model
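The card above is only a title; a minimal chat sketch, assuming the checkpoint follows the standard DialoGPT usage pattern in transformers (the prompt text is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CallForEcho/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("CallForEcho/DialoGPT-small-harrypotter")

# Encode one user turn, append the end-of-sequence token, and generate a reply.
prompt = "Hello, how are you?"  # placeholder user turn
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the reply after the user turn).
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```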
dapang/shuxue-wiki-basic-factor-pair-medium-10257
c9c6a9c7cc90cae8bd8deb712b68f4d2b729cd00
2022-03-28T12:05:34.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
dapang
null
dapang/shuxue-wiki-basic-factor-pair-medium-10257
1
null
transformers
31,002
Entry not found
tau/t5_lm_4_1024_0.3_epoch1
43d8aace8e0debb6ec190849784f6957e7ff87a7
2022-03-28T04:40:04.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tau
null
tau/t5_lm_4_1024_0.3_epoch1
1
null
transformers
31,003
Entry not found
dapang/shuxue-wiki-10257
3fcee09aec5637892cab385c1faeff561d9713c2
2022-03-28T22:21:08.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
dapang
null
dapang/shuxue-wiki-10257
1
null
transformers
31,004
Entry not found
dennisowusuk/wav2vec2-large-xls-r-300m-turkish-colab
0555cf3b0496aca600919814acb206494f2ffe22
2022-03-28T13:28:30.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
dennisowusuk
null
dennisowusuk/wav2vec2-large-xls-r-300m-turkish-colab
1
null
transformers
31,005
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3863 - Wer: 0.3095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8284 | 3.67 | 400 | 0.6782 | 0.6739 | | 0.4174 | 7.34 | 800 | 0.4524 | 0.4811 | | 0.2015 | 11.01 | 1200 | 0.4736 | 0.4311 | | 0.1371 | 14.68 | 1600 | 0.4254 | 0.3929 | | 0.0997 | 18.35 | 2000 | 0.4254 | 0.3636 | | 0.082 | 22.02 | 2400 | 0.3807 | 0.3474 | | 0.0665 | 25.69 | 2800 | 0.3987 | 0.3236 | | 0.0523 | 29.36 | 3200 | 0.3863 | 0.3095 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
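The card documents training but not inference; a minimal transcription sketch, assuming the checkpoint works with the standard transformers ASR pipeline (the audio filename is a placeholder and should point to 16 kHz speech):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dennisowusuk/wav2vec2-large-xls-r-300m-turkish-colab",
)

# Transcribe a local audio file; wav2vec2 XLS-R models expect 16 kHz mono input.
result = asr("sample_turkish_16khz.wav")  # placeholder path
print(result["text"])
```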
Taekyoon/unicon_v0.5.4_alpha
fa5a69c7a1af08a3fc1b23f6976e3451313a4bcd
2022-03-28T06:35:18.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
Taekyoon
null
Taekyoon/unicon_v0.5.4_alpha
1
null
transformers
31,006
Entry not found
robvanderg/Sem-mmmBERT
4229c7a4efed00b452b76e531868998bbae446d3
2022-03-28T11:28:17.000Z
[ "pytorch", "bert", "feature-extraction", "multilingual", "dataset:SemEval 2022", "transformers", "STILT", "retraining", "multi-task learning" ]
feature-extraction
false
robvanderg
null
robvanderg/Sem-mmmBERT
1
null
transformers
31,007
--- language: - multilingual tags: - STILT - retraining - multi-task learning datasets: - SemEval 2022 --- ## Sem-mmmBERT This is the SemEval MaChAmp Multitask Multilingual BERT model. This model is retrained from mBERT (https://huggingface.co/bert-base-multilingual-cased). The retraining is done based on all SemEval 2022 tasks that are text based, and have annotation on the word, sentence or paragraph level. The retraining is done with MaChAmp (https://machamp-nlp.github.io/), a toolkit focusing on multi-task learning for NLP. More information can be found in the paper (which should be released when the SemEval proceedings are online).
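Since the card tags the model for feature extraction but shows no code, here is a minimal embedding sketch with plain transformers, assuming the checkpoint loads as a standard BERT encoder (MaChAmp, linked above, is the intended multi-task fine-tuning route):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("robvanderg/Sem-mmmBERT")
model = AutoModel.from_pretrained("robvanderg/Sem-mmmBERT")

# Mean-pool the last hidden states into one fixed-size vector per sentence.
batch = tokenizer(["A multilingual example sentence."], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (batch, hidden_size)
```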
scasutt/wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
615d71a17b2b45277988831913fcb002b4bbf469
2022-03-28T18:53:54.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
scasutt
null
scasutt/wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
1
null
transformers
31,008
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-53_toy_train_data_fast_10pct results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53_toy_train_data_fast_10pct This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6983 - Wer: 0.5026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3619 | 1.05 | 250 | 3.4334 | 1.0 | | 3.0818 | 2.1 | 500 | 3.4914 | 1.0 | | 2.3245 | 3.15 | 750 | 1.6483 | 0.9486 | | 1.0233 | 4.2 | 1000 | 0.8817 | 0.7400 | | 0.7522 | 5.25 | 1250 | 0.7374 | 0.6529 | | 0.5343 | 6.3 | 1500 | 0.6972 | 0.6068 | | 0.4452 | 7.35 | 1750 | 0.6757 | 0.5740 | | 0.4275 | 8.4 | 2000 | 0.6789 | 0.5551 | | 0.3688 | 9.45 | 2250 | 0.6468 | 0.5394 | | 0.3363 | 10.5 | 2500 | 0.6798 | 0.5358 | | 0.3036 | 11.55 | 2750 | 0.6439 | 0.5265 | | 0.3173 | 12.6 | 3000 | 0.6898 | 0.5196 | | 0.2985 | 13.65 | 3250 | 0.6791 | 0.5169 | | 0.288 | 14.7 | 3500 | 0.6442 | 0.5090 | | 0.2673 | 15.75 | 3750 | 0.6984 | 0.5119 | | 0.2575 | 16.81 | 4000 | 0.7146 | 0.5084 | | 0.239 | 17.86 | 4250 | 0.6847 | 0.5040 | | 0.2266 | 18.91 | 4500 | 0.6900 | 0.5028 | | 0.22 | 19.96 | 4750 | 0.6983 | 0.5026 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
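The evaluation column above is word error rate; as a point of reference, a small sketch of how WER is computed with the jiwer package (an assumption for illustration; the card does not state which tool produced its numbers):

```python
from jiwer import wer

reference = "the cat sat on the mat"
hypothesis = "the cat sat on mat"

# WER = (substitutions + deletions + insertions) / number of reference words.
print(wer(reference, hypothesis))  # one deletion over six reference words, about 0.167
```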
ai4bharat/MultiIndicHeadlineGenerationSS
36734e347d21b7a25db9354bd0e003a6f8bf40ec
2022-05-06T10:21:44.000Z
[ "pytorch", "mbart", "text2text-generation", "arxiv:2203.05437", "transformers", "multilingual", "nlp", "indicnlp", "autotrain_compatible" ]
text2text-generation
false
ai4bharat
null
ai4bharat/MultiIndicHeadlineGenerationSS
1
null
transformers
31,009
--- languages: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te tags: - multilingual - nlp - indicnlp widget: - text: वैश्विक व्यापार युद्ध की शिकार हुई तुर्की की मुद्रा लीरा के डूबने से अमेरिकी डॉलर के मुकाबले रुपया अब तक के न्यूनतम स्तर पर पहुंच गया। रुपये में रिकॉर्ड गिरावट से सोने की चमक में निखार नहीं आ सकी। वैश्विक बाजार में सोना करीब आठ महीने के निचले स्तर पर पहुंच गया तो घरेलू बाजार में यह करीब नौ महीने के निचले स्तर पर चला गया। वैश्विक मंदी की आशंका से वैश्विक बाजार में चांदी करीब ढाई साल और घरेलू बाजार में तकरीबन नौ महीने के निचले स्तर पर पहुंच गई। तुर्की की आर्थिक चिंता के कारण अमेरिकी डॉलर के मुकाबले रुपया कारोबार के दौरान 70.80 के स्तर तक गिर गया। यह इसका ऐतिहासिक रिकॉर्ड निम्न स्तर है। कमजोर रुपये से सोने की चमक बढऩे की उम्मीद की जा रही थी लेकिन वैश्विक बाजार में सोने की कीमत गिरकर 1,193.50 डॉलर प्रति औंस पहुंचने के कारण घरेलू बाजार में भी सोने की चमक फीकी पड़ गई। घरेलू बाजार में सोना गिरकर 29,655 रुपये प्रति 10 ग्राम पहुंच गया। घरेलू वायदा बाजार यानी एमसीएक्स पर सोना 29,700 के आस-पास कारोबार कर रहा है। देश में इस साल सोने की मांग में लगातार गिरावट देखने को मिल रही थी। अप्रैल-जून तिमाही में सोने का आयात 25 फीसदी से भी कम हुआ है। चालू महीने में सोने की मांग बढऩे की उम्मीद जगी थी लेकिन यह उम्मीद टूट सकती है क्योंकि दुनिया के सबसे बड़े गोल्ड फंड एसपीडीआर गोल्ड की होल्डिंग अप्रैल के बाद 10 फीसदी गिर चुकी है। इस समय यह पिछले ढाई साल के निचले स्तर पर है। इस साल वैश्विक बाजार में सोना करीब 8.5 फीसदी और घरेलू बाजार में 1.5 फीसदी टूट चुका है। सराफा मामलों के जानकार अनिल अग्रवाल कहते हैं कि वैश्विक हालात ऐसे हैं कि इस समय निवेशक डॉलर में पैसा लगा रहे हैं। इस कारण दूसरी मुद्रा और जिंस दबाव में हैं। हालांकि हालात यही रहे तो सोने में तेज सुधार भी देखने को मिलेगा। वैश्विक मंदी की बढ़ती आशंका का सबसे ज्यादा असर चांदी पर पड़ रहा है। वैश्विक बाजार में चांदी के दाम ढाई साल के निचले स्तर पर पहुंच चुके हैं। वैश्विक बाजार में चांदी की कीमत 15 डॉलर प्रति औंस के करीब चल रही है। इसके पहले अप्रैल 2016 में चांदी इस स्तर पर थी। वैश्विक बाजार में चांदी के दाम दो महीने पहले 18.13 डॉलर प्रति औंस पर चल रहे थे। चांदी कारोबारी राहुल मेहता कहते हैं कि सोना और मूल धातु में कमजोरी से चांदी पर दोहरा दबाव पड़ रहा है। वैश्विक बाजार का व्यापार युद्ध अब मुद्रा युद्ध में बदल गया है। वैश्विक अर्थव्यवस्था एक बार फिर मंदी की गिरफ्त में आ सकती है जिसके कारण औद्योगिक विकास भी प्रभावित होगा। यही वजह है कि चांदी की कीमतें लगातार लुढक़ रही हैं क्योंकि मांग में कमी आने की आशंका बढ़ती जा रही है। फिलहाल घरेलू बाजार में चांदी 37,825 रुपये प्रति किलोग्राम पर बिक रही है। तुर्की के आर्थिक संकट से एक बार फिर वैश्विक मंदी का डर है जिसका असर दुनियाभर के बाजारों पर देखा जा सकता है। इसने विश्व स्तर पर निवेशकों के रुख को प्रभावित किया है और वे डॉलर को एक सुरक्षित निवेश के तौर पर देख रहे हैं। आनंद राठी शेयर्स ऐंड स्टाक ब्रोकर्स में शोध विश्लेषक आर मारू ने कहा कि आयातकों की अधिक मांग से रुपये की विनिमय दर में गिरावट आई। उन्होंने कहा, तुर्की संकट को लेकर अनिश्चितता तथा डॉलर सूचकांक में तेजी को देखते हुए आयातक आक्रमक तरीके से डॉलर की लिवाली कर रहे हैं। दूसरी तरफ आरबीआई की तरफ से आक्रमक हस्तक्षेप न होने से भी रुपया नीचे आया। सरकार ने अमेरिकी डॉलर के मुकाबले रुपये के अब तक के न्यूनतम स्तर पर पहुंचने के लिए बाह्य कारकों को जिम्मेदार ठहराते हुए कहा कि इसमें चिंता की कोई बात नहीं है।</s><2hi> --- MultiIndicHeadlineGenerationSS is a multilingual, sequence-to-sequence pre-trained model focusing only on Indic languages. It currently supports 11 Indian languages and is finetuned on [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint. 
You can use MultiIndicHeadlineGenerationSS model to build natural language generation applications in Indian languages for tasks like summarization, headline generation and other summarization related tasks. Some salient features of the MultiIndicHeadlineGenerationSS are: <ul> <li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li> <li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li> <li> Trained on large Indic language corpora (1.316 million paragraphs and 5.9 million unique tokens) . </li> <li>Unlike ai4bharat/MultiIndicHeadlineGeneration each language is written in its own script so you do not need to perform any script mapping to/from Devanagari.</li> </ul> # Usage: ``` from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM from transformers import AlbertTokenizer, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True) # Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True) model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS") # Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS") # Some initial mapping bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>") eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>") pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>") # To get lang_id use any of ['<2as>', '<2bn>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>'] # First tokenize the input and outputs. The format below is how MultiIndicHeadlineGenerationSS was trained so the input should be "Paragraph </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>". inp = tokenizer("यूट्यूब या फेसबुक पर वीडियो देखते समय आप भी बफरिंग की वजह से परेशान होते हैं? इसका जवाब हां है तो जल्द ही आपकी सारी समस्या खत्म होने वाली है। दरअसल, टेलीकॉम मिनिस्टर अश्विनी वैष्णव ने पिछले सप्ताह कहा कि अगस्त के अंत तक हर-हाल में '5G' इंटरनेट लॉन्च हो जाएगा। उन्होंने यह भी कहा है कि स्पेक्ट्रम की बिक्री शुरू हो चुकी है और जून तक ये प्रोसेस खत्म होने की संभावना है।</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[58232, 76, 14514, 53, 5344, 10605, 1052, 680, 83, 648, . . . . , 12126, 725, 19, 13635, 17, 7, 64001, 64007]]) out = tokenizer("<2hi> 5G इंटरनेट का इंतजार हुआ खत्म:अगस्त तक देश में शुरू हो सकती है 5G सर्विस </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64007, 329, 1906, 15429, . . . . ,17, 329, 1906, 27241, 64001]]) model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:]) # For loss model_outputs.loss ## This is not label smoothed. # For logits model_outputs.logits # For generation. Pardon the messiness. Note the decoder_start_token_id. 
model.eval() # Set dropouts to zero model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=32, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>")) # Decode to get output strings decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) print(decoded_output) # अगस्त के अंत तक '5G' इंटरनेट लॉन्च हो जाएगा : अश्विनी वैष्णव ``` # Benchmarks Scores on the `MultiIndicHeadlineGenerationSS` test sets are as follows: Language | Rouge-1 / Rouge-2 / Rouge-L ---------|---------------------------- as | 48.10 / 32.41 / 46.82 bn | 35.71 / 18.93 / 33.49 gu | 32.41 / 16.95 / 30.87 hi | 38.48 / 18.44 / 33.60 kn | 65.22 / 54.23 / 64.50 ml | 58.52 / 47.02 / 57.60 mr | 34.11 / 18.36 / 33.04 or | 24.83 / 11.00 / 23.74 pa | 45.15 / 27.71 / 42.12 ta | 47.15 / 31.09 / 45.72 te | 36.80 / 20.81 / 35.58 average | 42.41 / 27.00 / 40.64 # Contributors <ul> <li> Aman Kumar </li> <li> Prachi Sahu </li> <li> Himani Shrotriya </li> <li> Raj Dabre </li> <li> Anoop Kunchukuttan </li> <li> Ratish Puduppully </li> <li> Mitesh M. Khapra </li> <li> Pratyush Kumar </li> </ul> # Paper If you use MultiIndicHeadlineGeneration, please cite the following paper: ``` @inproceedings{Kumar2022IndicNLGSM, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, url = "https://arxiv.org/abs/2203.05437" } ```
jorge-henao/spanish-t5-small-disco-poetry
6a9baf81f3d08d23fe0144639655af32a30e993d
2022-03-28T21:26:45.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
jorge-henao
null
jorge-henao/spanish-t5-small-disco-poetry
1
null
transformers
31,010
--- license: mit tags: - generated_from_trainer model-index: - name: spanish-t5-small-disco-poetry results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-t5-small-disco-poetry This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1417 | 1.0 | 1284 | 0.0577 | | 0.0902 | 2.0 | 2568 | 0.0516 | | 0.0803 | 3.0 | 3852 | 0.0494 | | 0.0733 | 4.0 | 5136 | 0.0488 | | 0.0683 | 5.0 | 6420 | 0.0480 | | 0.067 | 6.0 | 7704 | 0.0477 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
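The card gives loss figures but no usage; a minimal generation sketch, assuming the checkpoint behaves as a standard T5 seq2seq model (the input string is a placeholder and the expected prompt format is not documented on the card):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jorge-henao/spanish-t5-small-disco-poetry")
model = AutoModelForSeq2SeqLM.from_pretrained("jorge-henao/spanish-t5-small-disco-poetry")

# Feed a short Spanish prompt and decode a beam-searched continuation.
inputs = tokenizer("amor y soledad", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```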
tau/fewsion_4_1024_0.3_epoch1
0e9211c1681c7184d0f448c7f4f740ed0769ac07
2022-03-28T18:37:35.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tau
null
tau/fewsion_4_1024_0.3_epoch1
1
null
transformers
31,011
Entry not found
Vkt/first_model
f6e8f35abd92f017888eb529f6cdbf134792dcdb
2022-05-20T13:56:39.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Vkt
null
Vkt/first_model
1
null
transformers
31,012
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: test-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-model This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.0161 - Wer: 0.0141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.8062 | 0.29 | 400 | 2.0576 | 1.0 | | 0.9633 | 0.57 | 800 | 0.5862 | 0.6023 | | 0.6079 | 0.86 | 1200 | 0.4897 | 0.4824 | | 0.4993 | 1.14 | 1600 | 0.3823 | 0.3989 | | 0.4269 | 1.43 | 2000 | 0.3749 | 0.3761 | | 0.4049 | 1.72 | 2400 | 0.3501 | 0.3536 | | 0.3998 | 2.0 | 2800 | 0.3527 | 0.3381 | | 0.3172 | 2.29 | 3200 | 0.3188 | 0.3257 | | 0.3161 | 2.57 | 3600 | 0.3217 | 0.3185 | | 0.3213 | 2.86 | 4000 | 0.2988 | 0.3007 | | 0.3035 | 3.15 | 4400 | 0.3036 | 0.3288 | | 0.261 | 3.43 | 4800 | 0.3095 | 0.2947 | | 0.2639 | 3.72 | 5200 | 0.2818 | 0.2767 | | 0.2771 | 4.0 | 5600 | 0.2739 | 0.2812 | | 0.2343 | 4.29 | 6000 | 0.2820 | 0.2700 | | 0.2452 | 4.57 | 6400 | 0.2663 | 0.2697 | | 0.2344 | 4.86 | 6800 | 0.2679 | 0.2666 | | 0.2215 | 5.15 | 7200 | 0.2687 | 0.2571 | | 0.2032 | 5.43 | 7600 | 0.2791 | 0.2624 | | 0.2092 | 5.72 | 8000 | 0.2682 | 0.2616 | | 0.2122 | 6.0 | 8400 | 0.2770 | 0.2591 | | 0.1878 | 6.29 | 8800 | 0.2760 | 0.2584 | | 0.1884 | 6.58 | 9200 | 0.2641 | 0.2515 | | 0.194 | 6.86 | 9600 | 0.2500 | 0.2415 | | 0.175 | 7.15 | 10000 | 0.2635 | 0.2532 | | 0.1658 | 7.43 | 10400 | 0.2588 | 0.2371 | | 0.177 | 7.72 | 10800 | 0.2813 | 0.2493 | | 0.1786 | 8.01 | 11200 | 0.2628 | 0.2437 | | 0.1509 | 8.29 | 11600 | 0.2592 | 0.2453 | | 0.1597 | 8.58 | 12000 | 0.2737 | 0.2523 | | 0.1646 | 8.86 | 12400 | 0.2556 | 0.2436 | | 0.1587 | 9.15 | 12800 | 0.2669 | 0.2453 | | 0.1489 | 9.44 | 13200 | 0.2596 | 0.2353 | | 0.1468 | 9.72 | 13600 | 0.2620 | 0.2419 | | 0.1482 | 10.01 | 14000 | 0.2622 | 0.2334 | | 0.1285 | 10.29 | 14400 | 0.2531 | 0.2258 | | 0.1335 | 10.58 | 14800 | 0.2512 | 0.2273 | | 0.1335 | 10.86 | 15200 | 0.2475 | 0.2246 | | 0.132 | 11.15 | 15600 | 0.2575 | 0.2275 | | 0.1249 | 11.44 | 16000 | 0.2503 | 0.2223 | | 0.1229 | 11.72 | 16400 | 0.2817 | 0.2297 | | 0.1274 | 12.01 | 16800 | 0.2707 | 0.2211 | | 0.1115 | 12.29 | 17200 | 0.2647 | 0.2175 | | 0.117 | 12.58 | 17600 | 0.2501 | 0.2178 | | 0.1164 | 12.87 | 18000 | 0.2579 | 0.2216 | | 0.1085 | 13.15 | 18400 | 0.2636 | 0.2130 | | 0.1033 | 13.44 | 18800 | 0.2643 | 0.2184 | | 0.1066 | 13.72 | 19200 | 0.2519 | 0.2158 | | 0.1032 | 14.01 | 19600 | 0.2322 | 0.2082 | | 0.0981 | 14.3 | 20000 | 0.2613 | 0.2125 | | 0.1009 | 14.58 | 20400 | 0.2479 | 0.2076 | | 0.1 | 14.87 | 20800 | 0.2464 | 0.2058 | | 
0.0886 | 15.15 | 21200 | 0.2595 | 0.2014 | | 0.0888 | 15.44 | 21600 | 0.2565 | 0.2048 | | 0.0916 | 15.73 | 22000 | 0.2470 | 0.2000 | | 0.095 | 16.01 | 22400 | 0.2539 | 0.1997 | | 0.0875 | 16.3 | 22800 | 0.2576 | 0.1995 | | 0.0833 | 16.58 | 23200 | 0.2514 | 0.1990 | | 0.0813 | 16.87 | 23600 | 0.2522 | 0.2020 | | 0.0845 | 17.16 | 24000 | 0.2522 | 0.2045 | | 0.0879 | 17.44 | 24400 | 0.2629 | 0.2183 | | 0.0854 | 17.73 | 24800 | 0.2464 | 0.2000 | | 0.0795 | 18.01 | 25200 | 0.2526 | 0.2078 | | 0.075 | 18.3 | 25600 | 0.2519 | 0.1971 | | 0.0724 | 18.58 | 26000 | 0.2551 | 0.1965 | | 0.0735 | 18.87 | 26400 | 0.2536 | 0.1934 | | 0.0735 | 19.16 | 26800 | 0.2504 | 0.1916 | | 0.0676 | 19.44 | 27200 | 0.2532 | 0.1884 | | 0.0687 | 19.73 | 27600 | 0.2498 | 0.1849 | | 0.0652 | 20.01 | 28000 | 0.2490 | 0.1847 | | 0.0617 | 20.3 | 28400 | 0.2547 | 0.1899 | | 0.0627 | 20.59 | 28800 | 0.2509 | 0.1834 | | 0.0639 | 20.87 | 29200 | 0.2472 | 0.1812 | | 0.0611 | 21.16 | 29600 | 0.2486 | 0.1827 | | 0.0559 | 21.44 | 30000 | 0.2530 | 0.1825 | | 0.0564 | 21.73 | 30400 | 0.2484 | 0.1785 | | 0.0593 | 22.02 | 30800 | 0.2425 | 0.1781 | | 0.0517 | 22.3 | 31200 | 0.2613 | 0.1775 | | 0.0528 | 22.59 | 31600 | 0.2517 | 0.1759 | | 0.0556 | 22.87 | 32000 | 0.2494 | 0.1811 | | 0.0507 | 23.16 | 32400 | 0.2522 | 0.1761 | | 0.0485 | 23.45 | 32800 | 0.2344 | 0.1717 | | 0.0504 | 23.73 | 33200 | 0.2458 | 0.1772 | | 0.0485 | 24.02 | 33600 | 0.2497 | 0.1748 | | 0.0436 | 24.3 | 34000 | 0.2405 | 0.1738 | | 0.0468 | 24.59 | 34400 | 0.2446 | 0.1735 | | 0.0443 | 24.87 | 34800 | 0.2514 | 0.1709 | | 0.0417 | 25.16 | 35200 | 0.2515 | 0.1711 | | 0.0399 | 25.45 | 35600 | 0.2452 | 0.1664 | | 0.0416 | 25.73 | 36000 | 0.2438 | 0.1664 | | 0.0412 | 26.02 | 36400 | 0.2457 | 0.1662 | | 0.0406 | 26.3 | 36800 | 0.2475 | 0.1659 | | 0.0376 | 26.59 | 37200 | 0.2454 | 0.1682 | | 0.0365 | 26.88 | 37600 | 0.2511 | 0.1650 | | 0.0355 | 27.16 | 38000 | 0.2518 | 0.1633 | | 0.032 | 27.45 | 38400 | 0.2479 | 0.1604 | | 0.0348 | 27.73 | 38800 | 0.2391 | 0.1599 | | 0.0331 | 28.02 | 39200 | 0.2417 | 0.1617 | | 0.0349 | 28.31 | 39600 | 0.2358 | 0.1590 | | 0.0347 | 28.59 | 40000 | 0.2388 | 0.1582 | | 0.0325 | 28.88 | 40400 | 0.2412 | 0.1564 | | 0.0332 | 29.16 | 40800 | 0.2390 | 0.1545 | | 0.0613 | 29.45 | 41200 | 0.0167 | 0.0141 | | 0.0563 | 29.74 | 41600 | 0.0161 | 0.0141 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.8.1+cu111 - Datasets 2.2.1 - Tokenizers 0.12.1
MU-NLPC/CzeGPT-2_headline_generator
9fdfe42e4f968873a23b2179855cb70c012fc6cf
2022-05-17T15:49:13.000Z
[ "pytorch", "gpt2", "text-generation", "cs", "dataset:csTenTen17", "transformers", "license:cc-by-nc-sa-4.0" ]
text-generation
false
MU-NLPC
null
MU-NLPC/CzeGPT-2_headline_generator
1
null
transformers
31,013
--- language: cs license: cc-by-nc-sa-4.0 datasets: - csTenTen17 --- # CzeGPT-2 headline generator CzeGPT-2_headline_generator is a Czech summarizer built upon the <a href="https://huggingface.co/MU-NLPC/CzeGPT-2">CzeGPT-2</a> model. The model has the same architectural dimensions as the GPT-2 small (12 layers, 12 heads, 1024 tokens on input/output, and embedding vectors with 768 dimensions) resulting in 124M trainable parameters. It was fine-tuned and evaluated on the <a href="https://aclanthology.org/L18-1551.pdf">SumeCzech</a> summarization dataset containing about 1M Czech news articles. ## Tokenizer Along, we also provide a Czech trained tokenizer (vocab and merges) with vocab size of 50257 that was used during the pre-training phase and fine-tuning. It is the byte-level BPE tokenizer as used in the original GPT-2 paper. ## Training results The model was evaluated on the *test* and *ood-test* partitions of the SumeCzech dataset and compared to the best summarizers yet evaluated on this benchmark (the results taken from <a href="https://ufal.mff.cuni.cz/sumeczech">here</a>). The headline generator is trained to decide itself when to stop (generate an <|endoftext|> token). If you want a variable summary length, refer to our <a href="https://huggingface.co/MU-NLPC/CzeGPT-2_summarizer">summary generator</a> We manage to exceed current state-of-the art on all standard metrics. Test set | Model | ROUGE<sub>RAW</sub>-1 | ROUGE<sub>RAW</sub>-2 | ROUGE<sub>RAW</sub>-L | | :---: | :------: | :-----: | :-----: | | CzeGPT-2 | **17.3**/**17.0**/**16.7** | **4.4**/**4.3**/**4.2** | **15.5**/**15.2**/**14.9**| | First | 7.4/13.5/8.9 | 1.1/2.2/1.3 | 6.5/11.7/7.7 | | TextRank | 6.0/16.5/8.3 | 0.8/2.3/1.1 | 5.0/13.8/6.9 | |Tensor2Tensor | 8.8/7.0/7.5 | 0.8/0.6/0.7 | 8.1/6.5/7.0 | |NE Density | 6.6/10.7/7.3 | 0.8/1.4/0.9 | 5.9/9.4/6.4 | |Seq2Seq | 16.1/14.1/14.6 | 2.5/2.1/2.2 | 14.6/12.8/13.2| |Seq2Seq<sub>NER</sub> | 16.2/14.1/14.7 | 2.5/2.1/2.2 | 14.7/12.8/13.3| OOD test set | Model | ROUGE<sub>RAW</sub>-1 | ROUGE<sub>RAW</sub>-2 | ROUGE<sub>RAW</sub>-L | | :---: | :------: | :-----: | :-----: | |CzeGPT-2 | **17.9**/**17.6**/**17.2** | **5.9**/**5.7**/**5.5** | **16.4**/**16.2**/**15.8** | |First | 6.7/13.6/8.3 | 1.3/2.8/1.6 | 5.9/12.0/7.4 | |TextRank | 5.8/16.9/8.1 | 1.1/3.4/1.5 | 5.0/14.5/6.9 | |Tensor2Tensor | 6.3/5.1/5.5 | 0.5/0.4/0.4 | 5.9/4.8/5.1 | |NE Density | 6.3/11.4/7.1 | 1.3/2.3/1.4 | 5.7/10.2/6.3 | |Seq2Seq | 13.1/11.8/12.0 | 2.0/1.7/1.8 | 12.1/11.0/11.2 | |Seq2SeqNER | 16.2/14.1/14.7 | 2.5/2.1/2.2 | 14.7/12.8/13.3 | The numbers in the tables denote *precision/recall/F1-score* ## Error Analysis As we think the current standard ROUGE<sub>RAW</sub> metric is not suitable enough for the summarization task (even though it is the best we have at the time), we performed also a manual error analysis of the generated summaries using human annotators. You can find more about the methodology and results in our paper referenced at the bottom of this card. ## Running the predictions The repository includes a simple Jupyter Notebook that can help with first steps when using the model. ## Summary generator See also our model fine-tuned for <a href="https://huggingface.co/MU-NLPC/CzeGPT-2_summarizer">summary generation task</a>. ## How to cite @unpublished{hajek_horak2022,<br> author = "Adam Hájek and Aleš Horák",<br> title = "CzeGPT-2 – New Model for Czech Summarization Task",<br> note = "preprint available at \url{https://openreview.net/forum?id=H43eQtxZefq}",<br> month = "3",<br> year = "2022",<br> }
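The card points to a Jupyter notebook in the repository for running predictions; as a rough stand-in, a hedged generation sketch assuming the checkpoint loads as a plain GPT-2 causal LM in transformers (the article text is a placeholder and the article-to-headline prompt format used during fine-tuning is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MU-NLPC/CzeGPT-2_headline_generator")
model = AutoModelForCausalLM.from_pretrained("MU-NLPC/CzeGPT-2_headline_generator")

# Placeholder Czech article text; the real separator between article and headline
# used during fine-tuning is not stated on this card.
article = "Krátký český novinový článek o počasí a dopravě."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=900)

# The card notes the generator stops itself by emitting <|endoftext|>.
out = model.generate(**inputs, max_new_tokens=40, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```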
jorge-henao/gpt2-small-spanish-disco-poetry-15
5d6dbbd46cd5a4e798d9c0e093408607fc57d1fe
2022-03-29T05:17:49.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
jorge-henao
null
jorge-henao/gpt2-small-spanish-disco-poetry-15
1
null
transformers
31,014
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: gpt2-small-spanish-disco-poetry-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-small-spanish-disco-poetry-15 This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.2465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
Rishav-hub/xlm-roberta-base-finetuned-panx-de
abda1d0f55df44356c91a242483207feb0af04a5
2022-03-29T11:05:37.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
Rishav-hub
null
Rishav-hub/xlm-roberta-base-finetuned-panx-de
1
null
transformers
31,015
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8591260810195721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1352 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.257 | 1.0 | 525 | 0.1512 | 0.8302 | | 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 | | 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
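A minimal tagging sketch to go with the F1 score reported above, assuming the checkpoint works with the standard transformers token-classification pipeline (the example sentence is a placeholder):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Rishav-hub/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte die Ruhr-Universität Bochum."))  # placeholder sentence
```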
Serranito/wav2vec2-base-timit-demo-colab
4000767c5e76a082687820778a8ecf2ccf054f43
2022-04-18T11:32:59.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Serranito
null
Serranito/wav2vec2-base-timit-demo-colab
1
null
transformers
31,016
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4283 - eval_wer: 0.3847 - eval_runtime: 133.4799 - eval_samples_per_second: 12.586 - eval_steps_per_second: 1.573 - epoch: 12.0 - step: 1500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
BeamBee/DialoGPT-small-LavenzaNumTwo
1f9ee4135cb45060b1e64961ec5ec08bf6f878c3
2022-03-29T16:30:52.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
BeamBee
null
BeamBee/DialoGPT-small-LavenzaNumTwo
1
null
transformers
31,017
--- tags: - conversational --- # LavenzaNumTwo DialoGPT Model
DrishtiSharma/poem-gen-spanish-t5-small-v6
d8ef64d4f3f3dea855461eacb967d91dd415fc4f
2022-03-29T23:45:09.000Z
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
DrishtiSharma
null
DrishtiSharma/poem-gen-spanish-t5-small-v6
1
null
transformers
31,018
--- license: mit tags: - generated_from_trainer model-index: - name: poem-gen-spanish-t5-small-v6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-spanish-t5-small-v6 This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.8551 | 0.73 | 30000 | 2.9296 | | 2.6961 | 1.46 | 60000 | 2.9005 | | 2.5756 | 2.19 | 90000 | 2.8786 | | 2.5095 | 2.93 | 120000 | 2.8621 | | 2.4061 | 3.66 | 150000 | 2.8830 | | 2.3161 | 4.39 | 180000 | 2.8865 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
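No usage is shown on the card; a minimal sketch via the transformers text2text pipeline, assuming the checkpoint is a drop-in T5 seq2seq model (the prompt is a placeholder; the base model's poem prompt format is not documented here):

```python
from transformers import pipeline

poem_gen = pipeline(
    "text2text-generation",
    model="DrishtiSharma/poem-gen-spanish-t5-small-v6",
)

# Placeholder Spanish prompt; adjust to whatever format the base model expects.
print(poem_gen("la luna sobre el mar", max_length=64, num_beams=4))
```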
jfreiwa/asr-crdnn-german
e591802bd809b91d95332aa50e93c9b364a47955
2022-05-23T21:06:32.000Z
[ "de", "arxiv:2106.04624", "speechbrain", "automatic-speech-recognition", "CTC", "Attention", "pytorch", "license:cc-by-sa-4.0" ]
automatic-speech-recognition
false
jfreiwa
null
jfreiwa/asr-crdnn-german
1
null
speechbrain
31,019
--- license: cc-by-sa-4.0 language: "de" thumbnail: tags: - automatic-speech-recognition - CTC - Attention - pytorch - speechbrain metrics: - wer --- # German ASR This model is trained on the Mozilla Common Voice 6.1, the Spoken Wikipedia Corpus and the m-ailabs corpus. - https://nats.gitlab.io/swc/ - https://commonvoice.mozilla.org/de/datasets - https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset/ We do not provide a language model. You can find the training codes [here](https://github.com/rub-ksv/asr-crdnn-german). # Performance This model has a WER of 7.24%. (You can find an updated version of this model here: https://huggingface.co/jfreiwa/asr-crdnn-german-umlaute) # Model application ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read the tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ## Using the model ``` from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="jfreiwa/asr-crdnn-german", savedir="pretrained_models/asr-crdnn-german") asr_model.transcribe_file("jfreiwa/asr-crdnn-german/example-de.wav") ``` ## Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. # Limitations We do not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` # **Citing our paper** Please, cite our paper, when you use this model in your research. ```bibtex @inproceedings{freiwald2022, author={J. Freiwald and P. Pracht and S. Gergen and D. Kolossa}, title={Open-Source End-To-End Learning for Privacy-Preserving German {ASR}}, year=2022, booktitle={DAGA 2022} } ``` # Acknowledgements This work was funded by the German Federal Ministry of Education and Research (BMBF) within the “Innovations for Tomorrow’s Production, Services, and Work” Program (02L19C200), a project that is implemented by the Project Management Agency Karlsruhe (PTKA). The authors are responsible for the content of this publication.
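To make the GPU note above concrete, here is the card's own loading snippet with the `run_opts` argument it describes (a small sketch; everything else is unchanged from the card):

```python
from speechbrain.pretrained import EncoderDecoderASR

# Same call as in the card's usage section, with run_opts added for GPU inference.
asr_model = EncoderDecoderASR.from_hparams(
    source="jfreiwa/asr-crdnn-german",
    savedir="pretrained_models/asr-crdnn-german",
    run_opts={"device": "cuda"},
)
print(asr_model.transcribe_file("jfreiwa/asr-crdnn-german/example-de.wav"))
```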
negfir/bert_uncased_L-12_H-768_A-12
54ab1afa89403e9a1cd3754e71b0f3b19dbce95b
2022-04-07T17:20:01.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-12_H-768_A-12
1
null
transformers
31,020
Entry not found
BigSalmon/PointsOneSent
515fe1dbd7ffa293d603aaf56449aadf4c94e50d
2022-03-29T21:26:49.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/PointsOneSent
1
null
transformers
31,021
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsOneSent") model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsOneSent") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - ``` It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
negfir/bert_uncased_L-12_H-256_A-4
81899a3bf624d1a3b185968124a24187df6b4715
2022-04-05T22:30:47.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-12_H-256_A-4
1
null
transformers
31,022
Entry not found
negfir/bert_uncased_L-12_H-128_A-2
dc2d5be31d67844160de6b63853b3effbb25e90d
2022-04-05T22:43:38.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-12_H-128_A-2
1
null
transformers
31,023
Entry not found
princeton-nlp/CoFi-SQuAD-s60
9850f111e653a096833845a8575c4152d33bbe66
2022-05-01T01:15:01.000Z
[ "pytorch", "bert", "question-answering", "arxiv:2204.00408", "transformers", "autotrain_compatible" ]
question-answering
false
princeton-nlp
null
princeton-nlp/CoFi-SQuAD-s60
1
null
transformers
31,024
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset SQuAD 1.1. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
negfir/bert_uncased_L-8_H-256_A-4
bfc3572e589e7dd91436d3b665e4aa9485d15f4e
2022-04-06T01:54:08.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-8_H-256_A-4
1
null
transformers
31,025
Entry not found
negfir/bert_uncased_L-8_H-128_A-2
6c4457399566ef728369bd7bb9bb53a02625ca84
2022-04-06T02:04:15.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-8_H-128_A-2
1
null
transformers
31,026
Entry not found
negfir/bert_uncased_L-6_H-512_A-8
2af6904f5f9ffc525846afba99d026658a214447
2022-04-06T03:00:29.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-6_H-512_A-8
1
null
transformers
31,027
Entry not found
negfir/bert_uncased_L-6_H-256_A-4
cc5bacf51fe54526fc327125128565b84b2f74d4
2022-04-06T03:12:22.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-6_H-256_A-4
1
null
transformers
31,028
Entry not found
negfir/bert_uncased_L-2_H-512_A-8
4c20f09c13c5a558842953f2ad43d7ebaee3d31a
2022-04-06T04:55:26.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-2_H-512_A-8
1
null
transformers
31,029
Entry not found
202015004/MY_st1_training_shreya_fixed_27_march_labled-decoded_level2_re
d4509c060e29dc2f3d2b5a7d0525a71c0a5d0679
2022-03-30T10:38:42.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
202015004
null
202015004/MY_st1_training_shreya_fixed_27_march_labled-decoded_level2_re
1
null
transformers
31,030
Entry not found
vesteinn/ScandiBERT-NER
0a4146750451865d8c7b88a1aeec967febdd096a
2022-03-30T09:16:03.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
vesteinn
null
vesteinn/ScandiBERT-NER
1
null
transformers
31,031
Entry not found
negfir/bert_uncased_L-10_H-256_A-4
eb35edd0e6758e8abc81a19bdd2fa8c2b29ed271
2022-04-06T00:20:07.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
negfir
null
negfir/bert_uncased_L-10_H-256_A-4
1
null
transformers
31,032
Entry not found
Mads/xlsr-0330
3534645b44f7a85db4d4b18830fef648e15b6a45
2022-03-31T07:24:58.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
Mads
null
Mads/xlsr-0330
1
null
transformers
31,033
Entry not found
bemich/DialoGPT-small-GeorgeCostanza
3dd7d7a35037c96e094ec3ea83528866c16df660
2022-03-31T03:12:47.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
bemich
null
bemich/DialoGPT-small-GeorgeCostanza
1
null
transformers
31,034
--- tags: - conversational --- # George Costanza DialoGPT model
Kuray107/librispeech-100h-supervised-aug
746f11ff47e2a09da984fea126a996f37d13ff23
2022-04-05T12:57:44.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Kuray107
null
Kuray107/librispeech-100h-supervised-aug
1
null
transformers
31,035
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: librispeech-100h-supervised-aug results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # librispeech-100h-supervised-aug This model is a fine-tuned version of [Kuray107/librispeech-5h-supervised](https://huggingface.co/Kuray107/librispeech-5h-supervised) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0776 - Wer: 0.0327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3099 | 1.12 | 1000 | 0.0748 | 0.0521 | | 0.1873 | 2.24 | 2000 | 0.0674 | 0.0440 | | 0.146 | 3.36 | 3000 | 0.0671 | 0.0406 | | 0.1233 | 4.48 | 4000 | 0.0619 | 0.0381 | | 0.1098 | 5.61 | 5000 | 0.0618 | 0.0381 | | 0.0985 | 6.73 | 6000 | 0.0590 | 0.0355 | | 0.0907 | 7.85 | 7000 | 0.0659 | 0.0352 | | 0.0837 | 8.97 | 8000 | 0.0679 | 0.0359 | | 0.0762 | 10.09 | 9000 | 0.0701 | 0.0349 | | 0.0707 | 11.21 | 10000 | 0.0715 | 0.0348 | | 0.0666 | 12.33 | 11000 | 0.0719 | 0.0346 | | 0.0631 | 13.45 | 12000 | 0.0746 | 0.0347 | | 0.0593 | 14.57 | 13000 | 0.0757 | 0.0340 | | 0.0554 | 15.7 | 14000 | 0.0746 | 0.0337 | | 0.053 | 16.82 | 15000 | 0.0757 | 0.0331 | | 0.0525 | 17.94 | 16000 | 0.0752 | 0.0327 | | 0.0514 | 19.06 | 17000 | 0.0776 | 0.0327 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
nikhil6041/wav2vec2-commonvoice-hindi
a80b62165ebc9aea1b6344dcfde5ba1926e8fa9f
2022-04-02T04:48:26.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
nikhil6041
null
nikhil6041/wav2vec2-commonvoice-hindi
1
null
transformers
31,036
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-commonvoice-hindi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-commonvoice-hindi This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9825 - Wer: 0.6763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 20.0 | 100 | 0.8801 | 0.6754 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
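The card stops at training details; a minimal CTC decoding sketch without the pipeline wrapper, assuming the checkpoint ships a standard Wav2Vec2 processor (the waveform below is placeholder silence; substitute real 16 kHz samples):

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("nikhil6041/wav2vec2-commonvoice-hindi")
model = Wav2Vec2ForCTC.from_pretrained("nikhil6041/wav2vec2-commonvoice-hindi")

# Placeholder input: one second of silence at 16 kHz; replace with real speech samples.
speech = np.zeros(16000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame; the tokenizer
# collapses repeats and removes blanks during decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```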
mimi/test_KE-T5
8449155d180df3abc383e186924a1edb7c296cbf
2022-04-07T19:51:54.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
mimi
null
mimi/test_KE-T5
1
null
transformers
31,037
Entry not found
AnonymousSub/news-pretrain-roberta
b49be0bb8ee97bbfe2cae9f777f11dba0f2681a9
2022-03-31T07:52:33.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
AnonymousSub
null
AnonymousSub/news-pretrain-roberta
1
null
transformers
31,038
Entry not found
AnonymousSub/news-pretrain-bert
edfd332f99bdfb9c6516386e0030fe69cf8a2498
2022-03-31T07:53:34.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
AnonymousSub
null
AnonymousSub/news-pretrain-bert
1
null
transformers
31,039
Entry not found
r1ck/bi-encoder-vi_wikiqa
f8e29f915e6e96f90feecfbb81bc935678f63fcd
2022-03-31T08:39:47.000Z
[ "pytorch", "roberta", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
r1ck
null
r1ck/bi-encoder-vi_wikiqa
1
null
sentence-transformers
31,040
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 8625 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 2500, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 1e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
scasutt/wav2vec2-base_toy_train_data_random_low_pass
e53f3505e9b10c9d6fc6a12a566514729758533e
2022-03-31T10:42:02.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
scasutt
null
scasutt/wav2vec2-base_toy_train_data_random_low_pass
1
null
transformers
31,041
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base_toy_train_data_random_low_pass results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base_toy_train_data_random_low_pass This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3227 - Wer: 0.7288 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0795 | 2.1 | 500 | 3.2227 | 0.9982 | | 1.21 | 4.2 | 1000 | 1.3713 | 0.8879 | | 0.742 | 6.3 | 1500 | 1.2660 | 0.8296 | | 0.5877 | 8.4 | 2000 | 1.2921 | 0.7794 | | 0.4823 | 10.5 | 2500 | 1.2899 | 0.7565 | | 0.4036 | 12.6 | 3000 | 1.3486 | 0.7494 | | 0.391 | 14.7 | 3500 | 1.2701 | 0.7466 | | 0.3426 | 16.81 | 4000 | 1.3570 | 0.7279 | | 0.3015 | 18.91 | 4500 | 1.3227 | 0.7288 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
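As a rough illustration of how the hyperparameters listed above translate into code, the sketch below maps them onto `transformers.TrainingArguments`; the `output_dir`, the `Trainer` wiring, and the data collator are omitted and assumed to exist elsewhere, so this is not the author's original training script.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters reported in the card above.
training_args = TrainingArguments(
    output_dir="wav2vec2-base_toy_train_data_random_low_pass",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=20,
)
```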
Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test
0aa193770df7e4fc92a7258a05036e6c81728dfe
2022-03-31T12:09:32.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Khalsuu
null
Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test
1
null
transformers
31,042
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: 2nd-wav2vec2-l-xls-r-300m-turkish-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2nd-wav2vec2-l-xls-r-300m-turkish-test This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6019 - Wer: 0.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0522 | 3.67 | 400 | 0.7773 | 0.7296 | | 0.5369 | 7.34 | 800 | 0.6282 | 0.5888 | | 0.276 | 11.01 | 1200 | 0.5998 | 0.5330 | | 0.1725 | 14.68 | 1600 | 0.5859 | 0.4908 | | 0.1177 | 18.35 | 2000 | 0.6019 | 0.4444 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
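A usage sketch for longer recordings follows; `chunk_length_s` lets the pipeline split audio that exceeds a comfortable input length before decoding, and the file path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test",
    chunk_length_s=30,  # split long recordings into 30 s chunks before decoding
)

# Placeholder path to a Turkish recording; decoding requires ffmpeg/soundfile.
print(asr("turkish_interview.wav")["text"])
```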
YiTian/wav2vec2-common_voice-tr-demo
c0ddbaab434fdc2090e3bd1650af8c28fd96db2e
2022-03-31T11:40:04.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "tr", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
YiTian
null
YiTian/wav2vec2-common_voice-tr-demo
1
null
transformers
31,043
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-tr-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-tr-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 2.9841 - Wer: 0.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 7.14 | 100 | 3.6689 | 1.0 | | No log | 14.29 | 200 | 3.0280 | 0.9999 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.0 - Datasets 1.18.0 - Tokenizers 0.11.6
202015004/Teacher_model_31_march
5e99825971eca3a7c34f2dbddf2ab57b79a3ed9e
2022-03-31T20:15:44.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
202015004
null
202015004/Teacher_model_31_march
1
null
transformers
31,044
Entry not found
creynier/wav2vec2-base-swbd-turn-eos-half
1f60aed7ccfda5736f3cd233c4de203e66512101
2022-04-14T15:46:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
creynier
null
creynier/wav2vec2-base-swbd-turn-eos-half
1
null
transformers
31,045
Entry not found
huggingtweets/timdingmanlive
000a447683e43c5211ed8e02f707c044e718246d
2022-03-31T14:30:05.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/timdingmanlive
1
null
transformers
31,046
--- language: en thumbnail: http://www.huggingtweets.com/timdingmanlive/1648736999131/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2844974270/7bb6450b90b65f8712d9433b8d5e1971_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tim Dingman</div> <div style="text-align: center; font-size: 14px;">@timdingmanlive</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tim Dingman. | Data | Tim Dingman | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 555 | | Short tweets | 138 | | Tweets kept | 2547 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7yvdv2z7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @timdingmanlive's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/311pu3zj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/timdingmanlive') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
benwoodyear/t5-small-cryptic-crosswords
81faa7a48ccf6b14a9a19bd9871aca02a5d6768c
2022-03-31T21:46:31.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
benwoodyear
null
benwoodyear/t5-small-cryptic-crosswords
1
null
transformers
31,047
Entry not found
benwoodyear/t5-large-cryptic-crosswords
9ed147d1e7b6b559acf21f96fe3138fd2640b896
2022-03-31T21:57:54.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
benwoodyear
null
benwoodyear/t5-large-cryptic-crosswords
1
null
transformers
31,048
Entry not found
Ramil/wav2vec2-large-xlsr-300m-turkish-lm
4a1534d987d08ad9e43af5f9e37e0bbedd9321d1
2022-04-01T00:10:57.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
Ramil
null
Ramil/wav2vec2-large-xlsr-300m-turkish-lm
1
null
transformers
31,049
Entry not found
Teyronebigdick/DialoGPT-small-harrypotter
5d645c327752430fa95a4c385ccec5d222b54cf2
2022-04-01T00:11:48.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Teyronebigdick
null
Teyronebigdick/DialoGPT-small-harrypotter
1
null
transformers
31,050
--- tags: - conversational --- # Harry Potter Model
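The card does not include usage code; a minimal single-turn chat sketch using the standard DialoGPT interface is shown below. The generation settings are illustrative defaults, not values taken from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Teyronebigdick/DialoGPT-small-harrypotter"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode one user turn followed by the EOS token DialoGPT uses as a turn separator.
user_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")

with torch.no_grad():
    reply_ids = model.generate(user_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Strip the prompt tokens and print only the model's reply.
print(tokenizer.decode(reply_ids[0, user_ids.shape[-1]:], skip_special_tokens=True))
```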
Splend1dchan/t5lephone-small-squad1024
e6f9e358ba1f11dee6c29b1a577abd30228c5781
2022-04-06T12:36:43.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Splend1dchan
null
Splend1dchan/t5lephone-small-squad1024
1
null
transformers
31,051
Entry not found
FrankCorrigan/results
e53474f877b87c3371df484f3b87a0c8a887ced5
2022-04-01T18:15:40.000Z
[ "pytorch", "bart", "text2text-generation", "dataset:samsum", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
FrankCorrigan
null
FrankCorrigan/results
1
null
transformers
31,052
--- license: apache-2.0 tags: - generated_from_trainer datasets: - samsum model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [linydub/bart-large-samsum](https://huggingface.co/linydub/bart-large-samsum) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.0158 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 0.9563 | | No log | 2.0 | 2 | 0.9877 | | No log | 3.0 | 3 | 1.0158 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0 - Datasets 2.0.0 - Tokenizers 0.11.6
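Since the base checkpoint is a SAMSum dialogue summarizer, a hedged usage sketch with the summarization pipeline follows; the example dialogue is invented for illustration and the length limits are arbitrary.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="FrankCorrigan/results")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure, I'd love a few!\n"
    "Amanda: Great, I'll bring them over tomorrow."
)

# Illustrative length limits; tune them for your own dialogues.
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```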
dchung117/distilbert-base-uncased-finetuned-squad-d5716d28
e26569e115efcae2ff91ca049ff9bd442299e6d7
2022-04-01T02:02:28.000Z
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
dchung117
null
dchung117/distilbert-base-uncased-finetuned-squad-d5716d28
1
null
transformers
31,053
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
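A short extractive-QA sketch is added below to complement the eval table; it assumes the uploaded weights include the SQuAD question-answering head described in the card, and the question/context pair is invented for illustration.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="dchung117/distilbert-base-uncased-finetuned-squad-d5716d28",
)

result = qa(
    question="Which model acts as the teacher?",
    context=(
        "A DistilBERT student is fine-tuned on SQuAD v1.1, with a BERT model "
        "also fine-tuned on SQuAD v1.1 acting as the teacher for a second "
        "step of task-specific distillation."
    ),
)
print(result["answer"], round(result["score"], 3))
```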
Sammith/DialoGPT-small-miachael
e8089df82645ff99fe98b782612209b858cb60c9
2022-04-01T04:34:10.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Sammith
null
Sammith/DialoGPT-small-miachael
1
null
transformers
31,054
--- tags: - conversational --- # my chatbot model
202015004/Teacher_model_31_march1
9ded0aaeadb65563fbb3f066c14357b2f25cbe1f
2022-04-01T07:47:52.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
202015004
null
202015004/Teacher_model_31_march1
1
null
transformers
31,055
Entry not found
Yingda/myfirstmodel
3ecfd82613e3721931c70daf08edb64daec4114e
2022-05-23T06:43:37.000Z
[ "pytorch", "albert", "text-generation", "transformers", "license:apache-2.0", "token-classification" ]
token-classification
false
Yingda
null
Yingda/myfirstmodel
1
null
transformers
31,056
--- license: apache-2.0 pipeline_tag: token-classification --- This is my model card
Nxtxn01/DialoGPT-small-harrypotter
b0e48ec3170eb280321b6b407314e97e337ef8e6
2022-04-01T08:13:15.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Nxtxn01
null
Nxtxn01/DialoGPT-small-harrypotter
1
null
transformers
31,057
--- tags: - conversational --- # Harry Potter DialoGPT Model
birgermoell/psst-base-rep
a7a9982f8943b094d6cca5a203e33955533f3e6a
2022-04-01T12:02:45.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
birgermoell
null
birgermoell/psst-base-rep
1
null
transformers
31,058
The model is a reproduction of the baseline trained with Wav2vec2-small on PSST. pssteval ASR metrics for split `valid`: FER 10.4%, PER 23.1%
202015004/Teacher_model_1_april
7dc241930bc66d8ce0ae87ae7624b1106fdfbd72
2022-04-01T10:24:17.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
202015004
null
202015004/Teacher_model_1_april
1
null
transformers
31,059
Entry not found
nn007/wikineural-multilingual-ner
6c8d81d181ece0c4679423e8937c7a25605663d3
2022-04-11T17:44:24.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
nn007
null
nn007/wikineural-multilingual-ner
1
null
transformers
31,060
Entry not found
RichardWang/test
6549220ab7bc55871d4e3707b1d49cb36bc9aa75
2022-05-08T03:02:57.000Z
[ "pytorch", "tsp", "transformers" ]
null
false
RichardWang
null
RichardWang/test
1
null
transformers
31,061
Entry not found
scasutt/wav2vec2-large-xlsr-53_toy_train_data_random_high_pass
a900587cd450d762165486a2ff9c93f3b3f81100
2022-04-01T17:35:45.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
scasutt
null
scasutt/wav2vec2-large-xlsr-53_toy_train_data_random_high_pass
1
null
transformers
31,062
Entry not found
deepakvk/albert-base-v2-squad2
f41b21b0dea1ac3e26eebb46a9c4f435aa69d21c
2022-04-02T13:36:31.000Z
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
deepakvk
null
deepakvk/albert-base-v2-squad2
1
null
transformers
31,063
Entry not found
202015004/Teacher_model_1_april_proper
acbf12912e5d8dc65d4020df2a86c064a4ae297c
2022-04-02T07:10:23.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
202015004
null
202015004/Teacher_model_1_april_proper
1
null
transformers
31,064
Entry not found
AnonymousSub/fpdm_triplet_bert_FT_newsqa
9934be36d282b9455d9176c261f3e34b4e93c83d
2022-04-01T21:52:03.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
AnonymousSub
null
AnonymousSub/fpdm_triplet_bert_FT_newsqa
1
null
transformers
31,065
Entry not found
AnonymousSub/news_pretrain_bert_FT_newsqa
e3de8eb805039b38af637e180d22d79db7bdedaa
2022-04-01T21:54:05.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
AnonymousSub
null
AnonymousSub/news_pretrain_bert_FT_newsqa
1
null
transformers
31,066
Entry not found
AnonymousSub/bert_FT_newsqa
78b86586d8e859291c05b9302712e02418445a0f
2022-04-01T21:56:34.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
AnonymousSub
null
AnonymousSub/bert_FT_newsqa
1
null
transformers
31,067
Entry not found
AnonymousSub/roberta_FT_newsqa
a66a6877190145cd17cea3ee8d04076762d2090a
2022-04-01T21:57:25.000Z
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
AnonymousSub
null
AnonymousSub/roberta_FT_newsqa
1
null
transformers
31,068
Entry not found
clisi2000/codeparrot
a941818d5f3c277743aa3b57088e5ccc4a19022a
2022-04-02T15:46:55.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
clisi2000
null
clisi2000/codeparrot
1
null
transformers
31,069
Entry not found
clisi2000/codeparrot-small
535ed67457f0231697ea5dafd72a8a7052f4edca
2022-04-03T02:24:23.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
clisi2000
null
clisi2000/codeparrot-small
1
null
transformers
31,070
Entry not found
jingwei001/distilgpt2-finetuned-wikitext2
d8143063a58917aec8f84eaf2dddca23ccfcc226
2022-04-02T14:40:16.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
jingwei001
null
jingwei001/distilgpt2-finetuned-wikitext2
1
null
transformers
31,071
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7607 | 1.0 | 2334 | 3.6664 | | 3.6323 | 2.0 | 4668 | 3.6461 | | 3.6075 | 3.0 | 7002 | 3.6432 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
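For completeness, a minimal text-generation sketch for this checkpoint is shown below; the prompt and generation settings are illustrative choices, not values from the card.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jingwei001/distilgpt2-finetuned-wikitext2")

# Illustrative prompt; adjust max_length and sampling settings as needed.
output = generator("The history of natural language processing", max_length=50, do_sample=True)
print(output[0]["generated_text"])
```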
juancavallotti/t5-base-es-en-fr-de
0db20e29e4eb720c042f6f455fae8ae18f545403
2022-04-02T07:34:27.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
juancavallotti
null
juancavallotti/t5-base-es-en-fr-de
1
null
transformers
31,072
Entry not found
yusufani/trclip-vitl14-e10
f49fe60a6a4ffa22aced1297a7f11a0289da64ad
2022-06-26T10:08:09.000Z
[ "pytorch", "trclip", "transformers", "license:afl-3.0" ]
null
false
yusufani
null
yusufani/trclip-vitl14-e10
1
1
transformers
31,073
--- license: afl-3.0 ---
aiface/test
f8cec0b68618455ab2c6c882931ede815e675707
2022-04-02T23:18:32.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
aiface
null
aiface/test
1
null
transformers
31,074
Entry not found
vocab-transformers/distilbert-mlm-250k
424f22300a1198337549f4de8e515e09c4bf019d
2022-04-02T21:10:59.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
vocab-transformers
null
vocab-transformers/distilbert-mlm-250k
1
null
transformers
31,075
distilbert-base-uncased trained with masked language modeling for 250K steps with a batch size of 64 on C4, MSMARCO, Wikipedia, S2ORC, and News.
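Because the checkpoint is a masked-language model, a small fill-mask sketch may be useful; the example sentence is invented and the model id is taken from this record.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="vocab-transformers/distilbert-mlm-250k")

# DistilBERT uses the [MASK] token; print the top predictions with their scores.
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```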
notexist/ttte
85a1bd31db50cdfd821833bc5577e4f4da9390b4
2022-04-03T01:28:49.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
notexist
null
notexist/ttte
1
null
transformers
31,076
Entry not found
ml6team/xlm-roberta-base-nl-emoji-ner
61e289ecd6fcc60fe2b88baa92d90042686cb34a
2022-04-20T09:21:12.000Z
[ "pytorch", "xlm-roberta", "token-classification", "nl", "transformers", "sequence-tagger-model", "autotrain_compatible" ]
token-classification
false
ml6team
null
ml6team/xlm-roberta-base-nl-emoji-ner
1
1
transformers
31,077
--- language: nl tags: - token-classification - sequence-tagger-model --- # Goal This model can be used to add emoji to an input text. To accomplish this, we framed the task as a token-classification problem, predicting the emoji that should follow a certain word/token as an entity. The accompanying demo, which includes all the pre- and postprocessing needed, can be found [here](https://huggingface.co/spaces/ml6team/emoji_predictor). For the moment, this only works for Dutch texts. # Dataset For this model, we scraped about 1000 unique tweets per emoji we support: ['😨', '😥', '😍', '😠', '🤯', '😄', '🍾', '🚗', '☕', '💰'] The tweets could look like this: ``` Wow 😍😍, what a cool car 🚗🚗! Omg, I hate mondays 😠... I need a drink 🍾 ``` After some processing, we can recast this in the familiar NER format: | Word | Label | |-------|-----| | Wow | B-😍| | , | O | | what | O | | a | O | | cool | O | | car | O | | ! | B-🚗| This format can then be leveraged for training a token-classification model. Unfortunately, Terms of Service prohibit us from sharing the original dataset. # Training The model was trained for 4 epochs.
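A rough sketch of the inference plus postprocessing the card describes (predicting an emoji as the entity that follows a word and re-inserting it into the text) is given below; the hosted demo's exact pre/postprocessing may differ, and the Dutch example sentence is invented.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ml6team/xlm-roberta-base-nl-emoji-ner",
    aggregation_strategy="simple",
)

text = "Wow, wat een coole auto!"
entities = ner(text)

# Insert each predicted emoji right after the span it was attached to,
# iterating from the end so earlier character offsets stay valid.
for entity in sorted(entities, key=lambda e: e["end"], reverse=True):
    text = text[: entity["end"]] + " " + entity["entity_group"] + text[entity["end"]:]

print(text)
```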
kumachan/dummy-model
322dc9af940e5f3288e1a5b4aac0fdf7b0fc0c43
2022-04-03T09:53:08.000Z
[ "pytorch", "camembert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
kumachan
null
kumachan/dummy-model
1
null
transformers
31,078
Entry not found
morahil/wav2vec2-large-xls-r-300m-hindi
b38105af6ae4f748b9ad21efc7c5120c261ad9fe
2022-04-03T17:28:16.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
morahil
null
morahil/wav2vec2-large-xls-r-300m-hindi
1
null
transformers
31,079
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
mustapha/wav2vec_iemocap_session_1
21522b1c192c8d50d445f778201b8706747c5029
2022-04-06T17:56:28.000Z
[ "pytorch" ]
null
false
mustapha
null
mustapha/wav2vec_iemocap_session_1
1
1
null
31,080
Entry not found
BigSalmon/InformalToFormalLincoln34
06772a5b0f996e4c30e9e0962a2683493cc2e4f0
2022-04-03T20:41:44.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/InformalToFormalLincoln34
1
null
transformers
31,081
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln34") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln34") ``` ``` - moviepass to return - this summer - swooped up by - original co-founder stacy spikes text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes. *** - middle schools do not have recess - should get back to doing it - amazing for communication - and getting kids to move around text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity. *** - ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. 
https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. 
*** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence.
jaeyeon/wav2vec2-large-xls-r-300m-en-colab
4ac322367dc30c3069f124f0946fb171213bd2ef
2022-04-07T11:06:38.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "dataset:librispeech_asr", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jaeyeon
null
jaeyeon/wav2vec2-large-xls-r-300m-en-colab
1
null
transformers
31,082
--- license: apache-2.0 tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: wav2vec2-large-xls-r-300m-en-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-en-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.1169 - Wer: 0.0597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.6951 | 0.22 | 100 | 3.1606 | 1.0 | | 2.924 | 0.45 | 200 | 2.9297 | 1.0 | | 2.5328 | 0.67 | 300 | 1.4339 | 0.8953 | | 0.8611 | 0.9 | 400 | 0.6104 | 0.5306 | | 0.3714 | 1.12 | 500 | 0.2497 | 0.2150 | | 0.2015 | 1.35 | 600 | 0.1853 | 0.1615 | | 0.1593 | 1.57 | 700 | 0.1613 | 0.1366 | | 0.1436 | 1.79 | 800 | 0.1503 | 0.1311 | | 0.1249 | 2.02 | 900 | 0.1374 | 0.1038 | | 0.0936 | 2.24 | 1000 | 0.1328 | 0.1016 | | 0.0896 | 2.47 | 1100 | 0.1234 | 0.0942 | | 0.0872 | 2.69 | 1200 | 0.1148 | 0.0922 | | 0.0859 | 2.91 | 1300 | 0.1140 | 0.0892 | | 0.0733 | 3.14 | 1400 | 0.1134 | 0.0839 | | 0.0633 | 3.36 | 1500 | 0.1085 | 0.0802 | | 0.0567 | 3.59 | 1600 | 0.1103 | 0.0807 | | 0.0604 | 3.81 | 1700 | 0.1088 | 0.0809 | | 0.0586 | 4.04 | 1800 | 0.1113 | 0.0804 | | 0.0516 | 4.26 | 1900 | 0.1123 | 0.0808 | | 0.055 | 4.48 | 2000 | 0.1130 | 0.0764 | | 0.0568 | 4.71 | 2100 | 0.1128 | 0.0807 | | 0.0529 | 4.93 | 2200 | 0.1009 | 0.0727 | | 0.0455 | 5.16 | 2300 | 0.1050 | 0.0726 | | 0.0443 | 5.38 | 2400 | 0.1078 | 0.0720 | | 0.0434 | 5.61 | 2500 | 0.1027 | 0.0702 | | 0.0418 | 5.83 | 2600 | 0.1009 | 0.0693 | | 0.0381 | 6.05 | 2700 | 0.1079 | 0.0689 | | 0.0344 | 6.28 | 2800 | 0.1062 | 0.0678 | | 0.0353 | 6.5 | 2900 | 0.1054 | 0.0682 | | 0.0342 | 6.73 | 3000 | 0.1030 | 0.0661 | | 0.0329 | 6.95 | 3100 | 0.1021 | 0.0659 | | 0.0316 | 7.17 | 3200 | 0.1085 | 0.0667 | | 0.0275 | 7.4 | 3300 | 0.1089 | 0.0645 | | 0.0275 | 7.62 | 3400 | 0.1064 | 0.0645 | | 0.0268 | 7.85 | 3500 | 0.1109 | 0.0639 | | 0.0259 | 8.07 | 3600 | 0.1123 | 0.0636 | | 0.024 | 8.3 | 3700 | 0.1169 | 0.0631 | | 0.0225 | 8.52 | 3800 | 0.1170 | 0.0617 | | 0.0229 | 8.74 | 3900 | 0.1153 | 0.0614 | | 0.0214 | 8.97 | 4000 | 0.1143 | 0.0610 | | 0.02 | 9.19 | 4100 | 0.1162 | 0.0606 | | 0.0194 | 9.42 | 4200 | 0.1173 | 0.0603 | | 0.0193 | 9.64 | 4300 | 0.1184 | 0.0601 | | 0.0177 | 9.87 | 4400 | 0.1169 | 0.0597 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
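To make the reported metric concrete, the toy snippet below shows how word error rate is computed with the `jiwer` package (an assumed dependency, not listed in the card); a real evaluation would decode the `librispeech_asr` validation split with the fine-tuned model and score the transcripts the same way.

```python
from jiwer import wer

# Toy reference/hypothesis pair just to illustrate the WER metric reported above.
references = ["the quick brown fox jumps over the lazy dog"]
hypotheses = ["the quick brown fox jumped over the lazy dog"]

print(f"WER: {wer(references, hypotheses):.4f}")  # one substitution out of nine words
```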
MrYiRen/DialoGPT-small-harrypotter
15cca8d96fae801f913700cd9bc79fb8cd7faf72
2022-04-04T07:39:25.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
MrYiRen
null
MrYiRen/DialoGPT-small-harrypotter
1
null
transformers
31,083
--- tags: - conversational --- # Harry Potter DialoGPT Model
microsoft/cvt-21-384-22k
813100c6a0cf8157243eac067667eb3a96564c09
2022-05-18T16:16:59.000Z
[ "pytorch", "cvt", "image-classification", "dataset:imagenet-1k", "arxiv:2103.15808", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
microsoft
null
microsoft/cvt-21-384-22k
1
null
transformers
31,084
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Convolutional Vision Transformer (CvT) CvT-21 model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://github.com/microsoft/CvT). Disclaimer: The team releasing CvT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Usage Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, CvtForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-21-384-22k') model = CvtForImageClassification.from_pretrained('microsoft/cvt-21-384-22k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ```
gao-huggingface/T5-IDX-Parent
0bc3846a2141fa90bb8657e616989583815d14c2
2022-04-04T15:57:54.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
gao-huggingface
null
gao-huggingface/T5-IDX-Parent
1
null
transformers
31,085
Entry not found
birgermoell/psst-common-voice
ca88d7f1e5c536a0bf1e937ebcef254bb43ed323
2022-04-04T18:06:17.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
birgermoell
null
birgermoell/psst-common-voice
1
null
transformers
31,086
Entry not found
Erfan/Test_model0
cd6fc5f6d239ce2bf47d97876f07c00ab95da9de
2022-04-04T21:36:06.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Erfan
null
Erfan/Test_model0
1
null
transformers
31,087
Entry not found
deepspeechvision/wav2vec2hindiasr
f96326e9bcb8abc02bb49effb4eaee0964e0877c
2022-04-05T16:16:09.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
deepspeechvision
null
deepspeechvision/wav2vec2hindiasr
1
null
transformers
31,088
Entry not found
Diya-999/fdBart-FNS
c7fb007b56b0338c837e53f4ec4d11afdbcad23a
2022-04-17T16:01:17.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
Diya-999
null
Diya-999/fdBart-FNS
1
null
transformers
31,089
--- license: afl-3.0 ---
inigopm/bert-finetuned-squad
efc7b0d872d807cd090b37219a9c31d963cfb35c
2022-05-11T15:10:08.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
inigopm
null
inigopm/bert-finetuned-squad
1
null
transformers
31,090
Entry not found
MrYiRen/DialoGPT-small-harrypotter2
2aa7c4bd32875dbf127b502015e93dfe42a638ad
2022-04-05T14:04:02.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
MrYiRen
null
MrYiRen/DialoGPT-small-harrypotter2
1
null
transformers
31,091
--- tags: - conversational --- # Harry Potter2 DialoGPT Model
AnonymousSub/fpdm_bert_FT_new_newsqa
dae2aeaf6b8d6195a6090f52a5fe156b0f45eabd
2022-04-05T14:41:52.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
AnonymousSub
null
AnonymousSub/fpdm_bert_FT_new_newsqa
1
null
transformers
31,092
Entry not found
AnonymousSub/fpdm_hier_roberta_FT_new_newsqa
ca5f9cd4298d286683efe44f454f5c7bab690a4c
2022-04-05T15:01:01.000Z
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
AnonymousSub
null
AnonymousSub/fpdm_hier_roberta_FT_new_newsqa
1
null
transformers
31,093
Entry not found
quincyqiang/bert-base-uncased-finetuned-swag
32a60650393d93ad713e40d2970e880608d66d49
2022-04-05T15:59:49.000Z
[ "pytorch", "tensorboard", "bert", "multiple-choice", "dataset:swag", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
multiple-choice
false
quincyqiang
null
quincyqiang/bert-base-uncased-finetuned-swag
1
null
transformers
31,094
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 1.0397 - Accuracy: 0.7892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.756 | 1.0 | 4597 | 0.6021 | 0.7646 | | 0.3978 | 2.0 | 9194 | 0.6617 | 0.7783 | | 0.1468 | 3.0 | 13791 | 1.0397 | 0.7892 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.8.0+cu111 - Datasets 1.17.0 - Tokenizers 0.11.6
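Since SWAG is a multiple-choice task, inference needs a slightly unusual input shape of (batch, num_choices, seq_len); a hedged sketch is shown below with an invented context and candidate endings.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "quincyqiang/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She picks up the guitar and"
endings = [
    "starts to play a quiet song.",
    "throws it into the lake.",
    "eats it for breakfast.",
    "files it under miscellaneous.",
]

# Pair the context with every candidate ending, then add a batch dimension
# so each tensor has shape (batch=1, num_choices, seq_len).
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {key: value.unsqueeze(0) for key, value in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)

print(endings[logits.argmax(dim=-1).item()])
```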
rowan1224/distilbert-slp
fd6827bf85a2a62c861aeaf5fa10bd3f02c3c579
2022-04-05T16:46:16.000Z
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
rowan1224
null
rowan1224/distilbert-slp
1
null
transformers
31,095
Entry not found
rowan1224/electra-squad-slp
0a56022ba06428731690941aae39776e19481eb7
2022-04-05T16:47:59.000Z
[ "pytorch", "electra", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
rowan1224
null
rowan1224/electra-squad-slp
1
null
transformers
31,096
Entry not found
novarac23/xlm-roberta-base-finetuned-panx-de
9d0a8dd0db9ad9881c7c6e7068708394984d846a
2022-04-05T18:26:07.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
novarac23
null
novarac23/xlm-roberta-base-finetuned-panx-de
1
null
transformers
31,097
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.862669465085938 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1374 - F1: 0.8627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2596 | 1.0 | 525 | 0.1571 | 0.8302 | | 0.1292 | 2.0 | 1050 | 0.1416 | 0.8455 | | 0.0809 | 3.0 | 1575 | 0.1374 | 0.8627 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
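A short German NER sketch follows; the example sentence is invented, and `aggregation_strategy="simple"` merges word pieces into whole entities.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="novarac23/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

# Invented German sentence; PAN-X labels persons, organisations and locations.
for entity in ner("Angela arbeitet bei Siemens in München."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```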
miesnerjacob/marian-finetuned-kde4-en-to-fr
25e0a2b6fdb6a2dc83bad9ac07a4a57db6db5412
2022-04-05T20:28:41.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:kde4", "transformers", "translation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
translation
false
miesnerjacob
null
miesnerjacob/marian-finetuned-kde4-en-to-fr
1
null
transformers
31,098
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-fr metrics: - name: Bleu type: bleu value: 52.94560734092563 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8559 - Bleu: 52.9456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
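A minimal translation sketch is included below; the input string is an invented KDE-style UI message, not an example from the dataset.

```python
from transformers import pipeline

translator = pipeline("translation", model="miesnerjacob/marian-finetuned-kde4-en-to-fr")

# Invented UI-style string in the spirit of the KDE4 dataset.
print(translator("Unable to import the selected file.")[0]["translation_text"])
```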
CenIA/albert-xlarge-spanish-finetuned-qa-tar
fc04b4aa4070e495d95b17acb846f3d4804061c2
2022-04-05T18:55:50.000Z
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
CenIA
null
CenIA/albert-xlarge-spanish-finetuned-qa-tar
1
null
transformers
31,099
Entry not found