Dataset columns:

- modelId: string (length 4 to 112)
- sha: string (length 40)
- lastModified: string (length 24)
- tags: sequence
- pipeline_tag: string (29 classes)
- private: bool (1 class)
- author: string (length 2 to 38)
- config: null
- id: string (length 4 to 112)
- downloads: float64 (0 to 36.8M)
- likes: float64 (0 to 712)
- library_name: string (17 classes)
- __index_level_0__: int64 (0 to 38.5k)
- readme: string (length 0 to 186k)
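The columns above describe one metadata row per Hugging Face model repository. Below is a minimal, hypothetical sketch of loading such a dump with the `datasets` library and filtering it by pipeline and library; the dataset path is a placeholder, not a real identifier.

```python
# Hypothetical sketch: load a dump with the schema above and filter it.
# The dataset path below is a placeholder, not a real identifier.
from datasets import load_dataset

ds = load_dataset("path/to/model-metadata-dump", split="train")

# Keep only text2text-generation models served by the transformers library.
subset = ds.filter(
    lambda row: row["pipeline_tag"] == "text2text-generation"
    and row["library_name"] == "transformers"
)
print(subset.column_names)  # modelId, sha, lastModified, tags, pipeline_tag, ...
print(len(subset))
```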
laituan245/molt5-base
7f2da6b30fd8ddc55b7867d53ce75e09bb85f284
2022-05-03T18:07:36.000Z
[ "pytorch", "t5", "text2text-generation", "arxiv:2204.11817", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/molt5-base
1
null
transformers
31,600
---
license: apache-2.0
---

## Example Usage

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-base", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base')
```

## Paper

For more information, please take a look at our paper.

Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)

Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
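The Example Usage snippet above stops after loading the checkpoint. A minimal generation sketch follows, assuming that snippet has already run; the input string and decoding parameters are illustrative and not taken from the paper, and for caption-to-SMILES or SMILES-to-caption translation the fine-tuned MolT5 variants would normally be used instead of this base checkpoint.

```python
# Continuing from the load snippet above; the input and decoding settings are
# illustrative only. For caption<->SMILES translation, the fine-tuned MolT5
# checkpoints are the usual choice rather than this base model.
input_text = "The molecule is a monocarboxylic acid."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```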
PSW/half_senttrm_del_seed1
65b2f5ff1b2b5fbd80820678404aed35be646650
2022-05-03T18:08:22.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/half_senttrm_del_seed1
1
null
transformers
31,601
Entry not found
PSW/half_senttrm_del_seed27
96745b96be91183d525780bdc2ad385eb94b9e8a
2022-05-03T18:51:09.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/half_senttrm_del_seed27
1
null
transformers
31,602
Entry not found
PSW/half_senttrm_del_seed42
7a359ec9803b59760e1b12577816aa826a5717de
2022-05-03T19:33:45.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/half_senttrm_del_seed42
1
null
transformers
31,603
Entry not found
BigSalmon/ConciseAndFormal
a154d7baa1558083e38a8c52a41f6b156230a3c2
2022-05-03T19:42:53.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
BigSalmon
null
BigSalmon/ConciseAndFormal
1
null
transformers
31,604
how to start prompt:
```
wordy:
```
example:
```
wordy: the ndp has turned into the country's darling of the young.
```
output:
```
the ndp is youth-driven.
```
OR
```
informal english:
```
example:
```
informal english: corn fields are all across illinois, visible once you leave chicago.
```
output:
```
corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
```
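The card above documents only the prompt format. The following is a rough sketch of applying that format through the standard transformers seq2seq API, under the assumption that the checkpoint loads with `AutoModelForSeq2SeqLM`; the decoding settings are illustrative.

```python
# A sketch of using the "wordy:" prompt format from the card; assumes the
# checkpoint works with the generic transformers seq2seq classes.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/ConciseAndFormal")
model = AutoModelForSeq2SeqLM.from_pretrained("BigSalmon/ConciseAndFormal")

prompt = "wordy: the ndp has turned into the country's darling of the young."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```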
BigSalmon/InformalToFormalLincoln41
b4745d040399f5dd7962398f705e29de4f5eda93
2022-05-03T20:07:25.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/InformalToFormalLincoln41
1
null
transformers
31,605
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln41") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln41") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ```
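The card's opening snippet loads the tokenizer and model but does not show a full generation call. Below is a minimal sketch, assuming that snippet has run; the prompt follows one of the templates above and the sampling settings are illustrative.

```python
# Assumes the tokenizer/model from the card's load snippet are in scope.
# The prompt mirrors one of the card's templates; sampling settings are illustrative.
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```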
PSW/min_senttrm_ins_seed27
26b1b9890d881251beb5edec7bc9a8c6574185f4
2022-05-03T20:59:39.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/min_senttrm_ins_seed27
1
null
transformers
31,606
Entry not found
Dizzykong/gpt2-quests-eos
4fde4a91bebe5deee577e69ac24b9798b5490c03
2022-05-03T21:14:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
Dizzykong
null
Dizzykong/gpt2-quests-eos
1
null
transformers
31,607
Entry not found
PSW/min_senttrm_ins_seed42
ac773f7b5cd225b3c62083c12748ffa3875e59b1
2022-05-03T21:42:27.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/min_senttrm_ins_seed42
1
null
transformers
31,608
Entry not found
PSW/max_senttrm_ins_seed1
e3074fe8369467bdebb6e66226beca19e7044f3e
2022-05-03T22:25:48.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/max_senttrm_ins_seed1
1
null
transformers
31,609
Entry not found
eastmountaincode/newDuneModel
b395d21190e6f2c3708ed899e658d3fac8f2b159
2022-05-03T23:59:31.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
eastmountaincode
null
eastmountaincode/newDuneModel
1
null
transformers
31,610
Entry not found
PSW/half_senttrm_ins_seed1
891302a1dac6421a045fc1638c36a5d0d37f90a5
2022-05-04T00:28:54.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/half_senttrm_ins_seed1
1
null
transformers
31,611
Entry not found
PSW/half_senttrm_ins_seed27
581133ed28149281224183d9f0de2829e0dd5659
2022-05-04T01:19:05.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/half_senttrm_ins_seed27
1
null
transformers
31,612
Entry not found
emolyscheisse/DialoGPT-small-mandybot
17038689b38acfbd135b4bb1408078f0b001a11c
2022-05-04T01:30:56.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
emolyscheisse
null
emolyscheisse/DialoGPT-small-mandybot
1
null
transformers
31,613
--- tags: - conversational --- # Mandy Bot DialoGPT Model
PSW/half_senttrm_ins_seed42
6729c2763f73e1c0a43153fa02fef7c0bbb1daed
2022-05-04T02:02:06.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/half_senttrm_ins_seed42
1
null
transformers
31,614
Entry not found
PSW/senttrm_swap_seed1
5d289452ac9372b7b11e9316206d1a7163836456
2022-05-04T02:45:10.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/senttrm_swap_seed1
1
null
transformers
31,615
Entry not found
PSW/senttrm_swap_seed27
72a2765936463d1c3e5bf0398a4d51cec72339b0
2022-05-04T03:28:13.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/senttrm_swap_seed27
1
null
transformers
31,616
Entry not found
PSW/senttrm_swap_seed42
8232894284e63d08db7be3c27e704b9ff5502af5
2022-05-04T04:11:22.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/senttrm_swap_seed42
1
null
transformers
31,617
Entry not found
iis2009002/xlm-roberta-base-finetuned-panx-de
26c2d66932ebe21a9f97c27d0dd6226fc7ba9c23
2022-05-12T06:54:05.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
iis2009002
null
iis2009002/xlm-roberta-base-finetuned-panx-de
1
null
transformers
31,618
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8627004891366169 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1363 - F1: 0.8627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 | | 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 | | 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
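The card above covers the training procedure but not inference. A minimal inference sketch with the token-classification pipeline follows; the German example sentence is illustrative.

```python
# A sketch of NER inference with the fine-tuned checkpoint; the German
# example sentence is illustrative.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="iis2009002/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wurde in Hamburg geboren."))
```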
jordimas/pronoms-prediction
569f1cab1b8488313de4ac50dc3738de58600de2
2022-05-04T10:47:54.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "license:mit", "autotrain_compatible" ]
token-classification
false
jordimas
null
jordimas/pronoms-prediction
1
null
transformers
31,619
--- license: mit ---
jonfrank/xlm-roberta-base-finetuned-panx-de
f76ff8103908e3d89da23d34a9f93f9402b16e11
2022-05-04T10:13:21.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
jonfrank
null
jonfrank/xlm-roberta-base-finetuned-panx-de
1
null
transformers
31,620
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8654425558524246 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1334 - F1: 0.8654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2541 | 1.0 | 525 | 0.1596 | 0.8242 | | 0.1284 | 2.0 | 1050 | 0.1360 | 0.8499 | | 0.0827 | 3.0 | 1575 | 0.1334 | 0.8654 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
PSW/mixed_sim4_seed27
144cc24ca14b7e1f2704be82619d40e5bbf8f127
2022-05-04T09:58:44.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/mixed_sim4_seed27
1
null
transformers
31,621
Entry not found
iis2009002/xlm-roberta-base-finetuned-panx-de-fr
4f2a40077901b59e0fda84ef224332c68338cf76
2022-05-12T07:03:30.000Z
[ "pytorch", "xlm-roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
iis2009002
null
iis2009002/xlm-roberta-base-finetuned-panx-de-fr
1
null
transformers
31,622
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1644 - F1: 0.8617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 | | 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 | | 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
neelan-elucidate-ai/wav2vec2-tcrs-runtest
306421474ec94d33f6f46ac79dab5a7bba7ce936
2022-05-04T16:33:48.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
neelan-elucidate-ai
null
neelan-elucidate-ai/wav2vec2-tcrs-runtest
1
null
transformers
31,623
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-tcrs-runtest results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-tcrs-runtest This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1370 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 22.437 | 1.43 | 10 | 36.3252 | 1.0 | | 14.7939 | 2.86 | 20 | 10.7441 | 1.0 | | 4.1824 | 4.29 | 30 | 3.7354 | 1.0 | | 3.289 | 5.71 | 40 | 3.5265 | 1.0 | | 3.1639 | 7.14 | 50 | 3.2868 | 1.0 | | 3.1107 | 8.57 | 60 | 3.3268 | 1.0 | | 3.0737 | 10.0 | 70 | 3.1149 | 1.0 | | 3.0273 | 11.43 | 80 | 3.2031 | 1.0 | | 3.0422 | 12.86 | 90 | 3.0771 | 1.0 | | 2.9957 | 14.29 | 100 | 3.0418 | 1.0 | | 2.9894 | 15.71 | 110 | 3.0321 | 1.0 | | 2.9997 | 17.14 | 120 | 3.0545 | 1.0 | | 2.9806 | 18.57 | 130 | 2.9936 | 1.0 | | 2.969 | 20.0 | 140 | 3.0322 | 1.0 | | 2.9692 | 21.43 | 150 | 3.0238 | 1.0 | | 2.9638 | 22.86 | 160 | 3.0407 | 1.0 | | 2.969 | 24.29 | 170 | 3.2487 | 1.0 | | 2.9783 | 25.71 | 180 | 3.1248 | 1.0 | | 2.9576 | 27.14 | 190 | 3.0880 | 1.0 | | 2.968 | 28.57 | 200 | 3.0962 | 1.0 | | 2.9784 | 30.0 | 210 | 3.1370 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
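For completeness, a sketch of running this checkpoint through the automatic-speech-recognition pipeline is shown below; given the reported WER of 1.0 on the evaluation set, useful transcriptions should not be expected, and the audio path is a placeholder.

```python
# Illustrative only: the card reports WER 1.0, so the transcriptions are not
# expected to be useful. The audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="neelan-elucidate-ai/wav2vec2-tcrs-runtest",
)
print(asr("path/to/audio.wav"))
```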
PSW/mixed_sim4_seed42
a47167b8ccf90cea2ac92352c95b78fd801f4ab2
2022-05-04T10:41:35.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/mixed_sim4_seed42
1
null
transformers
31,624
Entry not found
iis2009002/xlm-roberta-base-finetuned-panx-fr
2b3e5a13b151f2ead122bfadc2c5260c3b8e37b3
2022-05-12T07:06:21.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:xtreme", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
iis2009002
null
iis2009002/xlm-roberta-base-finetuned-panx-fr
1
null
transformers
31,625
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.835464333781965 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2867 - F1: 0.8355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5817 | 1.0 | 191 | 0.3395 | 0.7854 | | 0.2617 | 2.0 | 382 | 0.2856 | 0.8278 | | 0.1708 | 3.0 | 573 | 0.2867 | 0.8355 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
PSW/senttrm_mix_seed1
fca2cd317f0259f0285ea29361006a46532600cc
2022-05-04T11:24:43.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/senttrm_mix_seed1
1
null
transformers
31,626
Entry not found
PSW/senttrm_mix_seed27
2b5b90d6ca85bfd01ecbf35e77eaaab541910035
2022-05-04T12:07:34.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/senttrm_mix_seed27
1
null
transformers
31,627
Entry not found
PSW/senttrm_mix_seed42
6a311f9cea0c263b653a0121008475fbd098174f
2022-05-04T12:50:51.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/senttrm_mix_seed42
1
null
transformers
31,628
Entry not found
yvesconst/mt5-ftune-edu-qg-fr
b42763e959c98fc97b8365af98774425a6278c30
2022-05-04T13:06:05.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
yvesconst
null
yvesconst/mt5-ftune-edu-qg-fr
1
null
transformers
31,629
--- license: apache-2.0 ---
Danastos/dpr-question_encoder_el_custom
5f6442b8c6fc7e5d26d9db8ab097ff4b5a6e7128
2022-05-04T16:04:51.000Z
[ "pytorch", "dpr", "transformers" ]
null
false
Danastos
null
Danastos/dpr-question_encoder_el_custom
1
null
transformers
31,630
Entry not found
lilitket/20220504-155308
3f1e53dd731e59f0e670e7d5aa18765aa4b3af2a
2022-05-04T19:07:34.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
lilitket
null
lilitket/20220504-155308
1
null
transformers
31,631
Entry not found
huggingtweets/zacksteffen_
5aa4b00b2c28faf674aa5ce8e39cbf9f4ca5b23a
2022-05-04T16:16:32.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/zacksteffen_
1
null
transformers
31,632
--- language: en thumbnail: http://www.huggingtweets.com/zacksteffen_/1651680987265/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Zack Steffen</div> <div style="text-align: center; font-size: 14px;">@zacksteffen_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Zack Steffen. | Data | Zack Steffen | | --- | --- | | Tweets downloaded | 3120 | | Retweets | 869 | | Short tweets | 523 | | Tweets kept | 1728 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nz1w2dd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zacksteffen_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lqwnrcja) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lqwnrcja/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/zacksteffen_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
theojolliffe/bart-large-cnn-finetuned-roundup-2-1
62e872cf0308345ff6fe211f27fb59c27f3c14d7
2022-05-04T16:57:42.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
theojolliffe
null
theojolliffe/bart-large-cnn-finetuned-roundup-2-1
1
null
transformers
31,633
--- license: mit tags: - generated_from_trainer model-index: - name: bart-large-cnn-finetuned-roundup-2-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-2-1 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 167 | 1.2456 | 51.7546 | 32.4725 | 33.0461 | 49.0513 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
theojolliffe/bart-large-cnn-finetuned-roundup-2-2
062c5bdc340e99c59d4245e8e61fbb025b0dbe76
2022-05-05T14:02:16.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
theojolliffe
null
theojolliffe/bart-large-cnn-finetuned-roundup-2-2
1
null
transformers
31,634
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-finetuned-roundup-2-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-2-2 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1521 - Rouge1: 52.6634 - Rouge2: 32.537 - Rougel: 33.3148 - Rougelsum: 50.148 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 167 | 1.2139 | 52.546 | 32.4912 | 32.9529 | 49.8241 | 142.0 | | No log | 2.0 | 334 | 1.1521 | 52.6634 | 32.537 | 33.3148 | 50.148 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
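The card reports ROUGE scores but no usage snippet. A minimal summarization sketch follows; the input text is a placeholder, and max_length=142 simply mirrors the generation length reported in the card.

```python
# A sketch of summarization with the fine-tuned checkpoint; the article text is
# a placeholder and max_length mirrors the card's reported generation length.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-large-cnn-finetuned-roundup-2-2",
)
article = "Replace this with the text to be summarised."
print(summarizer(article, max_length=142, min_length=30, do_sample=False))
```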
lilitket/20220504-180816
7b4ffb7382370d89ac7e21a6b685486697ed902a
2022-05-04T19:46:54.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
lilitket
null
lilitket/20220504-180816
1
null
transformers
31,635
Entry not found
huggingtweets/kanyewest-usmnt-zlisto
cf4bccd69aa61490fe7e80f2fdf85681e2695015
2022-05-04T19:29:40.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/kanyewest-usmnt-zlisto
1
null
transformers
31,636
--- language: en thumbnail: http://www.huggingtweets.com/kanyewest-usmnt-zlisto/1651692574685/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1486104199763013632/uC8Ujhgj_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & ye & Tauhid R. Zaman</div> <div style="text-align: center; font-size: 14px;">@kanyewest-usmnt-zlisto</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from USMNT & ye & Tauhid R. Zaman. | Data | USMNT | ye | Tauhid R. Zaman | | --- | --- | --- | --- | | Tweets downloaded | 3247 | 1858 | 3098 | | Retweets | 600 | 188 | 1232 | | Short tweets | 215 | 573 | 106 | | Tweets kept | 2432 | 1097 | 1760 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gvuccyzi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kanyewest-usmnt-zlisto's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1no8s780) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1no8s780/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/kanyewest-usmnt-zlisto') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BigSalmon/GPT2InformalToFormalLincoln42
d0837910633143354ad7f69d63f0983200c06a77
2022-05-04T20:21:23.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
BigSalmon
null
BigSalmon/GPT2InformalToFormalLincoln42
1
null
transformers
31,637
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPT2InformalToFormalLincoln42") model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPT2InformalToFormalLincoln42") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ```
Yanhao/simcse-bert-large-uncased
678c1ce1c4c6a31b7ade4cbb3833cbf202759a75
2022-05-04T22:07:20.000Z
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
false
Yanhao
null
Yanhao/simcse-bert-large-uncased
1
null
transformers
31,638
Entry not found
lilitket/20220504-221523
03ba9ccd469d9adbc9c44ef7aee9104d7684ee40
2022-05-05T22:11:06.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
lilitket
null
lilitket/20220504-221523
1
null
transformers
31,639
Entry not found
lilitket/20220504-221549
fce063d29dcf76f0c3d8c3d656c0f14c2a0031b5
2022-05-05T11:34:56.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
lilitket
null
lilitket/20220504-221549
1
null
transformers
31,640
Entry not found
laituan245/t5-v1_1-small-caption2smiles
b64e522b5f456e24bf3a5908c0cb9ef31bda18cd
2022-05-05T00:10:18.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/t5-v1_1-small-caption2smiles
1
null
transformers
31,641
--- license: apache-2.0 ---
laituan245/t5-v1_1-small-smiles2caption
cf731029d7ceaa6cdb5e54c3eb70e4a048af2570
2022-05-05T00:17:34.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/t5-v1_1-small-smiles2caption
1
null
transformers
31,642
--- license: apache-2.0 ---
laituan245/t5-v1_1-large-smiles2caption
4c3afa02a8789c81f980f015d381302a27fc05f4
2022-05-05T01:04:13.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/t5-v1_1-large-smiles2caption
1
null
transformers
31,643
--- license: apache-2.0 ---
laituan245/t5-v1_1-small-smiles2caption-ft-from-pretrained-c4
5eca383ddc521f7da8879bc56b869529e835ec4c
2022-05-05T02:16:45.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/t5-v1_1-small-smiles2caption-ft-from-pretrained-c4
1
null
transformers
31,644
Entry not found
laituan245/t5-v1_1-small-caption2smiles-ft-from-pretrained-c4
4971a4da61aaa050c98ffe87ac0f0535359526d9
2022-05-05T02:23:10.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/t5-v1_1-small-caption2smiles-ft-from-pretrained-c4
1
null
transformers
31,645
Entry not found
laituan245/t5-v1_1-small-smiles2caption-ft-from-pretrained-zinc
0b7df225bac6305770a31352d2341875ef457343
2022-05-05T02:37:22.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
laituan245
null
laituan245/t5-v1_1-small-smiles2caption-ft-from-pretrained-zinc
1
null
transformers
31,646
Entry not found
aaatul/xlm-roberta-large-finetuned-ner
5c5b5b724333833ca6074be2e98e975360a993de
2022-06-01T09:06:31.000Z
[ "pytorch", "xlm-roberta", "token-classification", "dataset:hi_ner_config", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
aaatul
null
aaatul/xlm-roberta-large-finetuned-ner
1
null
transformers
31,647
--- license: mit tags: - generated_from_trainer datasets: - hi_ner_config model-index: - name: xlm-roberta-large-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-ner This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the hi_ner_config dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
PSW/low_resource_percent1_minmaxswap_seed27
55ece750e43094df71d22fffb17cc11e8398108d
2022-05-05T07:02:47.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_minmaxswap_seed27
1
null
transformers
31,648
Entry not found
PSW/low_resource_percent1_randomdel_seed1
7cca13d828cb4ec8cf979a29098c17cd0a900fb5
2022-05-05T07:57:20.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_randomdel_seed1
1
null
transformers
31,649
Entry not found
PSW/low_resource_percent1_randomdel_seed27
1fe83ef7f49863eaf7ab7ad37b766461e7f773a2
2022-05-05T08:08:34.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_randomdel_seed27
1
null
transformers
31,650
Entry not found
DioLiu/distilroberta-base-wiki_shake_mask
bc370a92ecfd1afc50258a77110fcf5ce093d1fd
2022-05-05T09:26:08.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
DioLiu
null
DioLiu/distilroberta-base-wiki_shake_mask
1
null
transformers
31,651
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-wiki_shake_mask results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-wiki_shake_mask This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6528 | 1.0 | 3015 | 2.5390 | | 2.5536 | 2.0 | 6030 | 2.4558 | | 2.5396 | 3.0 | 9045 | 2.4464 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
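The card above documents training only. A minimal fill-mask sketch follows; the example sentence is illustrative and `<mask>` is the RoBERTa-style mask token used by distilroberta.

```python
# A sketch of querying the masked-language model; the sentence is illustrative
# and <mask> is the RoBERTa-style mask token used by distilroberta.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DioLiu/distilroberta-base-wiki_shake_mask")
for prediction in fill_mask("Shall I compare thee to a summer's <mask>?"):
    print(prediction["token_str"], prediction["score"])
```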
PSW/low_resource_percent1_randomins_seed1
fc3a54ae34f80bd1e0dd7f254863f489f20821a8
2022-05-05T08:29:39.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_randomins_seed1
1
null
transformers
31,652
Entry not found
PSW/low_resource_percent1_randomins_seed27
d0c39f940ebe3ed0df6e480b52abcf94afbf6200
2022-05-05T08:40:17.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_randomins_seed27
1
null
transformers
31,653
Entry not found
PSW/low_resource_percent1_randomswap_seed1
24f07b1a840dd28b309e78883b1a4fedb2c06ea9
2022-05-05T09:01:52.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent1_randomswap_seed1
1
null
transformers
31,654
Entry not found
CarlCochet/trajectory-transformer-ant-medium-expert-v2
b71b730ed68c338836de09b07fd97102785af892
2022-05-12T16:56:30.000Z
[ "pytorch", "trajectory_transformer", "feature-extraction", "transformers", "license:mit" ]
feature-extraction
false
CarlCochet
null
CarlCochet/trajectory-transformer-ant-medium-expert-v2
1
null
transformers
31,655
--- license: mit ---
CarlCochet/trajectory-transformer-ant-medium-replay-v2
245182042697164e959373281ecee709f5769eba
2022-05-12T16:57:17.000Z
[ "pytorch", "trajectory_transformer", "feature-extraction", "transformers", "license:mit" ]
feature-extraction
false
CarlCochet
null
CarlCochet/trajectory-transformer-ant-medium-replay-v2
1
null
transformers
31,656
--- license: mit ---
PSW/low_resource_percent10_maxsimins_seed1
1a40e96c99fd1c8f270ccef268196f2b1db5fa5f
2022-05-05T10:00:10.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_maxsimins_seed1
1
null
transformers
31,657
Entry not found
adityay1221/cat.5.32
b12f2f929f126f74333536e7276d8dc53d9c962a
2022-05-05T09:58:36.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
adityay1221
null
adityay1221/cat.5.32
1
null
transformers
31,658
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: cat.5.32
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# cat.5.32

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0293
- Bleu: 25.3811

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 121
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
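The card reports a BLEU score but no loading or inference code. Below is a hedged loading sketch, not taken from the original card; the task prefix and input sentence are placeholders, since the card does not name the language pair or task.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load the fine-tuned t5-small checkpoint (standard repo layout assumed).
tokenizer = AutoTokenizer.from_pretrained("adityay1221/cat.5.32")
model = T5ForConditionalGeneration.from_pretrained("adityay1221/cat.5.32")

# The card does not state the expected prefix, so this input is purely illustrative.
inputs = tokenizer("translate English to German: The cat sat on the mat.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```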
PSW/low_resource_percent10_maxsimins_seed27
54ac78f99d9a3510dd235c156d38ed51ba7fea1b
2022-05-05T10:13:41.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_maxsimins_seed27
1
null
transformers
31,659
Entry not found
PSW/low_resource_percent10_maxsimins_seed42
5e235fc09aef85d9aac74a19bf459c9554890e7d
2022-05-05T10:28:50.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_maxsimins_seed42
1
null
transformers
31,660
Entry not found
PSW/low_resource_percent10_minmaxswap_seed1
deca3c84d93bbd7d8d16d4e3bd31d6e45ffa6dfe
2022-05-05T10:44:01.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_minmaxswap_seed1
1
null
transformers
31,661
Entry not found
PSW/low_resource_percent10_minmaxswap_seed27
8008cbd7f29520aad3541b9ba8a83e14f0953d0a
2022-05-05T10:59:47.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_minmaxswap_seed27
1
null
transformers
31,662
Entry not found
masakhane/afrimbart_en_lug_news
3ad3f5ef4886f7e64aec78c5e994b7e63cb207ba
2022-05-05T13:41:25.000Z
[ "pytorch", "mbart", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afrimbart_en_lug_news
1
null
transformers
31,663
--- license: afl-3.0 ---
masakhane/afrimt5_en_lug_news
e8057600da66fc3b21c7e161f9ba4d5fc8a5440c
2022-05-05T13:41:11.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afrimt5_en_lug_news
1
null
transformers
31,664
--- license: afl-3.0 ---
masakhane/afribyt5_lug_en_news
d809310beaf6eeba769f3e9582e007651c7741d4
2022-05-05T13:50:17.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/afribyt5_lug_en_news
1
null
transformers
31,665
--- license: afl-3.0 ---
masakhane/mt5_lug_en_news
7a32b8c55a04711aa862cb6771c8e0637e9d3f61
2022-05-05T14:04:28.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/mt5_lug_en_news
1
null
transformers
31,666
--- license: afl-3.0 ---
aware-ai/wav2vec2-xls-r-300m-german
f1e207210170e9df66086ea0947a34eae7ac4c46
2022-05-28T07:50:30.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "de", "transformers", "mozilla-foundation/common_voice_9_0", "generated_from_trainer", "model-index" ]
automatic-speech-recognition
false
aware-ai
null
aware-ai/wav2vec2-xls-r-300m-german
1
null
transformers
31,667
---
language:
- de
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-german
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m-german

This model is a fine-tuned version of [wav2vec2-xls-r-300m-german](https://huggingface.co/wav2vec2-xls-r-300m-german) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4304
- Wer: 0.4507

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3598        | 1.0   | 569  | 0.4313          | 0.4512 |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 2.1.0
- Tokenizers 0.11.0
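No inference snippet accompanies the card above. The following is a minimal transcription sketch, not part of the original card; it assumes the repository contains a CTC head and processor, and the audio file name is a placeholder.

```python
from transformers import pipeline

# Build an ASR pipeline from the fine-tuned checkpoint (CTC head and processor assumed present).
asr = pipeline("automatic-speech-recognition", model="aware-ai/wav2vec2-xls-r-300m-german")

# "sample_de.wav" is a placeholder path; a 16 kHz mono German recording is expected.
print(asr("sample_de.wav")["text"])
```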
masakhane/m2m100_418M_en_lug_rel_news
2ad7f847917340cf8e066492498b22e144b28ff8
2022-05-05T14:14:06.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_en_lug_rel_news
1
null
transformers
31,668
--- license: afl-3.0 ---
masakhane/m2m100_418M_en_lug_rel_news_ft
d8cdc76faa6b2e106203a1afe3aae275b496b055
2022-05-05T14:22:53.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_en_lug_rel_news_ft
1
null
transformers
31,669
--- license: afl-3.0 ---
masakhane/m2m100_418M_lug_en_rel_ft
633b5cda871da5871260bd112bfd21787215758b
2022-05-05T14:23:03.000Z
[ "pytorch", "m2m_100", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
masakhane
null
masakhane/m2m100_418M_lug_en_rel_ft
1
null
transformers
31,670
--- license: afl-3.0 ---
PSW/low_resource_percent10_minsimdel_seed1
e7235bdd00b937ae13277bdad79013e4b0dca069
2022-05-05T11:29:24.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_minsimdel_seed1
1
null
transformers
31,671
Entry not found
Theimisa/distilbert-base-uncased-aisera_texts
680317f14f39dfbc4eb2cbaccf0cf97cfe07d5c4
2022-05-09T09:49:59.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
Theimisa
null
Theimisa/distilbert-base-uncased-aisera_texts
1
null
transformers
31,672
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-aisera_texts
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-aisera_texts

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8283

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.0694        | 1.0   | 7790  | 1.9868          |
| 1.9054        | 2.0   | 15580 | 1.8646          |
| 1.8701        | 3.0   | 23370 | 1.8283          |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Tokenizers 0.12.1
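As with the other auto-generated cards in this dump, training details are listed but no usage code. This is a hedged fill-mask sketch: the prompt sentence is invented, and the checkpoint is assumed to expose a masked-LM head.

```python
from transformers import pipeline

# DistilBERT checkpoints use [MASK] as the mask token (usage sketch, not from the original card).
fill_mask = pipeline("fill-mask", model="Theimisa/distilbert-base-uncased-aisera_texts")
print(fill_mask("Please restart the [MASK] and try again."))
```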
PSW/low_resource_percent10_randomins_seed1
167fd3154c8f29327eaa71dacadcf37fefbe11c6
2022-05-05T12:59:30.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomins_seed1
1
null
transformers
31,673
Entry not found
dyyyyyyyy/xTune_panx_XLM-RoBERTa-base
56a9a34bfa2fbe06fc3bef07d744bd3fa04858a0
2022-05-05T14:06:17.000Z
[ "pytorch", "xlm-roberta", "transformers" ]
null
false
dyyyyyyyy
null
dyyyyyyyy/xTune_panx_XLM-RoBERTa-base
1
null
transformers
31,674
Entry not found
PSW/low_resource_percent10_randomswap_seed27
27158ac7947ebfe400a239256baad98daf4fb641
2022-05-05T13:56:00.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent10_randomswap_seed27
1
null
transformers
31,675
Entry not found
tau/False_large_random_para0_sent1_span2_itFalse_sargmax_rrFalse_8_1024_0.15_1
dccc3efe38e97f3c9f1bd1274af89ab98c698289
2022-05-05T14:00:00.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tau
null
tau/False_large_random_para0_sent1_span2_itFalse_sargmax_rrFalse_8_1024_0.15_1
1
null
transformers
31,676
Entry not found
tau/False_large_rouge_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.15_1
519692256604a734a836729ecfe9c558324db32b
2022-05-05T13:59:39.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tau
null
tau/False_large_rouge_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.15_1
1
null
transformers
31,677
Entry not found
tau/False_large_pmi_para0_sent1_span2_itFalse_ssoftmax_rrFalse_8_1024_0.15_1
9176da21acc93eaa12fa535f2e143bd43d19e273
2022-05-05T18:18:45.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
tau
null
tau/False_large_pmi_para0_sent1_span2_itFalse_ssoftmax_rrFalse_8_1024_0.15_1
1
null
transformers
31,678
Entry not found
dyyyyyyyy/xTune_udpos_XLM-RoBERTa-base
323bfbc798c875cf3ee9dd088ace39f1b910e83f
2022-05-05T14:36:36.000Z
[ "pytorch", "xlm-roberta", "transformers" ]
null
false
dyyyyyyyy
null
dyyyyyyyy/xTune_udpos_XLM-RoBERTa-base
1
null
transformers
31,679
Entry not found
dyyyyyyyy/xTune_udpos_XLM-RoBERTa-large
4c2389d32e56333a305d0b9e0d72287c212e3cdc
2022-05-05T14:37:44.000Z
[ "pytorch", "xlm-roberta", "transformers" ]
null
false
dyyyyyyyy
null
dyyyyyyyy/xTune_udpos_XLM-RoBERTa-large
1
null
transformers
31,680
Entry not found
PSW/low_resource_percent20_maxsimins_seed27
5ce799f4cc188d3b9ee45a4e28684d4016a83b91
2022-05-05T15:43:11.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_maxsimins_seed27
1
null
transformers
31,681
Entry not found
PSW/low_resource_percent20_minmaxswap_seed1
fb6e79b3f7350b8e4c48874e5606ef52c52725c5
2022-05-05T16:16:14.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_minmaxswap_seed1
1
null
transformers
31,682
Entry not found
PSW/low_resource_percent20_minmaxswap_seed27
ba569c0279f4a9e57c995e1c9de6bcd5b5a0fc74
2022-05-05T16:32:10.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_minmaxswap_seed27
1
null
transformers
31,683
Entry not found
PSW/low_resource_percent20_minsimdel_seed1
4c5e5fc19d7d68a550efb47dc9ba813c3ccad491
2022-05-05T16:59:29.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_minsimdel_seed1
1
null
transformers
31,684
Entry not found
PSW/low_resource_percent20_randomdel_seed27
2ca3b3959a209f2021a96421f25d56cf369e1436
2022-05-05T17:53:21.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomdel_seed27
1
null
transformers
31,685
Entry not found
zhanxw/test
869dd8d5d376d04647347c7d3919d0211cce6cd6
2022-05-05T18:25:47.000Z
[ "pytorch", "swin", "image-classification", "transformers", "license:mit" ]
image-classification
false
zhanxw
null
zhanxw/test
1
null
transformers
31,686
--- license: mit ---
PSW/low_resource_percent20_randomdel_seed42
c670f15d134e796df8070cc30e0bdbca76121e0c
2022-05-05T18:09:39.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomdel_seed42
1
null
transformers
31,687
Entry not found
PSW/low_resource_percent20_randomins_seed1
14194b8863cabba8207337bf849bbebf45d02d53
2022-05-05T18:26:16.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomins_seed1
1
null
transformers
31,688
Entry not found
dyyyyyyyy/MVR_panx_XLM-RoBERTa-large
217412927476626ba425b04cd99a03bb5b5deea4
2022-05-06T05:21:29.000Z
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dyyyyyyyy
null
dyyyyyyyy/MVR_panx_XLM-RoBERTa-large
1
null
transformers
31,689
Entry not found
dyyyyyyyy/MVR_squad_XLM-RoBERTa-large
cc3035aeeb7c7540b795bfc1802b4bff3351305a
2022-05-06T06:52:25.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dyyyyyyyy
null
dyyyyyyyy/MVR_squad_XLM-RoBERTa-large
1
null
transformers
31,690
Entry not found
dyyyyyyyy/MVR_squad_XLM-RoBERTa-base
ad3b7537877269c5d2bdff20af0f248c0ea000a0
2022-05-06T06:45:06.000Z
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dyyyyyyyy
null
dyyyyyyyy/MVR_squad_XLM-RoBERTa-base
1
null
transformers
31,691
Entry not found
dyyyyyyyy/MVR_panx_BERT-base-multilingual-cased
b603a82267d3780c14f17b67605d08e900a4d81a
2022-05-06T05:13:48.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dyyyyyyyy
null
dyyyyyyyy/MVR_panx_BERT-base-multilingual-cased
1
null
transformers
31,692
Entry not found
PSW/low_resource_percent20_randomins_seed27
431c4c5a5b81786d4ecb9b0c723a8c06c424acb2
2022-05-05T18:42:41.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomins_seed27
1
null
transformers
31,693
Entry not found
PSW/low_resource_percent20_randomins_seed42
a7598ba30f063deadb78c227e7198c8957ee96b5
2022-05-05T18:59:36.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomins_seed42
1
null
transformers
31,694
Entry not found
PSW/low_resource_percent20_randomswap_seed27
9eee3ab0fc4b6cb6eaeff8504fd00c33c8f83e48
2022-05-05T19:32:46.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomswap_seed27
1
null
transformers
31,695
Entry not found
abhilashawasthi/bert-base-uncased-issues-128
1e3e2439ffb321596fb1ff06d4499ae7ea082445
2022-05-05T20:17:08.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
fill-mask
false
abhilashawasthi
null
abhilashawasthi/bert-base-uncased-issues-128
1
null
transformers
31,696
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-issues-128

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2520

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0949        | 1.0   | 291  | 1.7072          |
| 1.649         | 2.0   | 582  | 1.4409          |
| 1.4835        | 3.0   | 873  | 1.4099          |
| 1.3938        | 4.0   | 1164 | 1.3858          |
| 1.3326        | 5.0   | 1455 | 1.2004          |
| 1.2949        | 6.0   | 1746 | 1.2955          |
| 1.2451        | 7.0   | 2037 | 1.2682          |
| 1.1992        | 8.0   | 2328 | 1.1938          |
| 1.1784        | 9.0   | 2619 | 1.1686          |
| 1.1397        | 10.0  | 2910 | 1.2050          |
| 1.1293        | 11.0  | 3201 | 1.2058          |
| 1.1006        | 12.0  | 3492 | 1.1680          |
| 1.0835        | 13.0  | 3783 | 1.2414          |
| 1.0757        | 14.0  | 4074 | 1.1522          |
| 1.062         | 15.0  | 4365 | 1.1176          |
| 1.0535        | 16.0  | 4656 | 1.2520          |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
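The card documents training only. The sketch below shows one way to query the masked-LM head directly; it is not from the original card, and the example sentence and top-5 readout are purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Usage sketch (not from the original card): score candidate fills for a masked sentence.
tokenizer = AutoTokenizer.from_pretrained("abhilashawasthi/bert-base-uncased-issues-128")
model = AutoModelForMaskedLM.from_pretrained("abhilashawasthi/bert-base-uncased-issues-128")

inputs = tokenizer("The build fails with a [MASK] error.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the top predictions at the masked position.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions].topk(5).indices[0]
print(tokenizer.decode(top_ids))
```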
PSW/low_resource_percent20_randomswap_seed42
563d0f60db190197222e6b095feccf8bd946ef79
2022-05-05T19:49:59.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_randomswap_seed42
1
null
transformers
31,697
Entry not found
PSW/low_resource_percent20_seed42
1c7caf5f364434c1cd5b6abeac1037f4a9225d0c
2022-05-05T20:30:53.000Z
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
PSW
null
PSW/low_resource_percent20_seed42
1
null
transformers
31,698
Entry not found
huggingtweets/mikedolanvevo
b9dcd25ba01b78c7221ee23ffad1e27aebdc3e4e
2022-05-05T20:52:31.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/mikedolanvevo
1
null
transformers
31,699
---
language: en
thumbnail: http://www.huggingtweets.com/mikedolanvevo/1651783946409/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1500475522291277831/EmO4IU6D_400x400.jpg&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lil venice bitch</div>
<div style="text-align: center; font-size: 14px;">@mikedolanvevo</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from lil venice bitch.

| Data | lil venice bitch |
| --- | --- |
| Tweets downloaded | 3184 |
| Retweets | 426 |
| Short tweets | 311 |
| Tweets kept | 2447 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jhq37i4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mikedolanvevo's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1emwhhe4) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1emwhhe4/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/mikedolanvevo')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)