Column types: modelId (string, 4–112 chars); sha (string, 40 chars); lastModified (string, 24 chars); tags (sequence); pipeline_tag (string, 29 classes); private (bool, 1 class); author (string, 2–38 chars, nullable); config (null); id (string, 4–112 chars); downloads (float64, 0–36.8M, nullable); likes (float64, 0–712, nullable); library_name (string, 17 classes); __index_level_0__ (int64, 0–38.5k); readme (string, 0–186k chars).

modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ArpanZS/debug_squad | 0682b33fce4b58062eee914f18c6c2661d115245 | 2021-12-28T10:12:34.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ArpanZS | null | ArpanZS/debug_squad | 1 | null | transformers | 27,800 | Entry not found |
Ashkanmh/bert-base-parsbert-uncased-finetuned | a042a203ef9c8956f605b5b5d797255d86fc4f23 | 2021-09-08T20:56:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Ashkanmh | null | Ashkanmh/bert-base-parsbert-uncased-finetuned | 1 | null | transformers | 27,801 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-parsbert-uncased-finetuned
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-parsbert-uncased-finetuned
This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5596 | 1.0 | 515 | 3.2097 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
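As a usage sketch, the checkpoint can be queried through the fill-mask pipeline (the Persian example sentence is illustrative, and the repository is assumed to ship its tokenizer, as the Trainer normally pushes it):
```
from transformers import pipeline

# Load the fine-tuned ParsBERT checkpoint for masked-token prediction
fill = pipeline("fill-mask", model="Ashkanmh/bert-base-parsbert-uncased-finetuned")

# BERT-style models use the [MASK] token; the sentence below is illustrative
# ("Tehran is the capital of [MASK].")
for pred in fill("تهران پایتخت [MASK] است."):
    print(pred["token_str"], round(pred["score"], 3))
```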
|
Asuramaru/DialoGPT-small-rintohsaka | 86b6b57955fa6b28eef2740a8b8bf622f7b151aa | 2021-09-24T19:56:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Asuramaru | null | Asuramaru/DialoGPT-small-rintohsaka | 1 | null | transformers | 27,802 | ---
tags:
- conversational
---
# RinTohsaka bot |
Augustvember/WOKKAWOKKA | b956bb2d50e9623922fa950e5009b18bbcdd470a | 2021-08-11T04:34:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Augustvember | null | Augustvember/WOKKAWOKKA | 1 | null | transformers | 27,803 | ---
tags:
- conversational
---
# MyAwesomeModel |
Augustvember/test | 2a840066c4ae4e6ea96ab9368541cb6a6cc9ec90 | 2021-08-10T07:40:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Augustvember | null | Augustvember/test | 1 | null | transformers | 27,804 | ---
tags:
- conversational
---
# MyAwesomeModel |
Augustvember/wokkabottest2 | c72656a6d8db3a879aed1eaed185a826e953908f | 2021-08-09T02:56:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Augustvember | null | Augustvember/wokkabottest2 | 1 | null | transformers | 27,805 | ---
tags:
- conversational
---
# MyAwesomeModel |
Axcel/DialoGPT-small-rick | baa9154898b28759cd97a1e0ddcba9547a5465cf | 2021-09-13T18:16:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Axcel | null | Axcel/DialoGPT-small-rick | 1 | null | transformers | 27,806 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
Aybars/ModelOnWhole | fd356c873f4480da1d7521f3a88cba0bf60f009b | 2022-02-14T06:33:52.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Aybars | null | Aybars/ModelOnWhole | 1 | null | transformers | 27,807 | Entry not found |
Aybars/XLM_Turkish | 15910d0ddbddc7465d4dbb769d9b2c077eb79c35 | 2022-02-15T10:31:35.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Aybars | null | Aybars/XLM_Turkish | 1 | null | transformers | 27,808 | Entry not found |
Ayjayo/DialoGPT-medium-AyjayoAI | b16f267e52b3f3e52b963b2ce61e90c575131703 | 2022-01-27T17:13:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ayjayo | null | Ayjayo/DialoGPT-medium-AyjayoAI | 1 | null | transformers | 27,809 | ---
tags:
- conversational
---
# Ayjayo |
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18 | c98ca251711706612f3d3fedbd797d64bb21823c | 2021-11-09T15:24:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ayran | null | Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18 | 1 | null | transformers | 27,810 | ---
tags:
- conversational
---
# DialoGPT medium model (Based on Harry Potter 1 through 4 plus 6, 18 epochs) |
AyushPJ/ai-club-inductions-21-nlp-ALBERT | 977dc4cdf568d5b4758aad9c8828da02c15adde6 | 2021-10-20T23:28:44.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | AyushPJ | null | AyushPJ/ai-club-inductions-21-nlp-ALBERT | 1 | null | transformers | 27,811 | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-ALBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-ALBERT
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
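As a usage sketch (the question and context strings are placeholders), the checkpoint can be served through the question-answering pipeline:
```
from transformers import pipeline

# Load the fine-tuned ALBERT checkpoint for extractive QA
qa = pipeline("question-answering", model="AyushPJ/ai-club-inductions-21-nlp-ALBERT")

result = qa(
    question="What task were the models trained for?",
    context="The AI club inductions 2021 included an NLP task on extractive question answering.",
)
print(result["answer"], result["score"])  # extracted span and its confidence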
|
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad | b51e8a2f05412ca6da79014563e62c33aca6408e | 2021-10-26T10:41:20.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | AyushPJ | null | AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad | 1 | null | transformers | 27,812 | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-ELECTRA-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-ELECTRA-base-squad
This model is deepset/electra-base-squad2 fine-tuned on data from the AI Inductions 21 NLP competition (https://www.kaggle.com/c/ai-inductions-21-nlp) for extractive QA.
## Model description
More information needed
## Intended uses & limitations
AI Inductions 21 NLP competition
## Training and evaluation data
AI Inductions 21 NLP competition data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- max_length = 512
- doc_stride = 384
- learning_rate: 2e-05
- weight_decay=0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
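The max_length and doc_stride values above determine how long passages are split into overlapping windows; a minimal inference sketch that mirrors them (the question and context are placeholders):
```
from transformers import pipeline

qa = pipeline("question-answering", model="AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad")

# Mirror the training-time windowing: 512-token windows with a 384-token stride
context = ("The AI Inductions 21 NLP task asks participants to extract "
           "answers from long passages of competition data. ") * 40
result = qa(
    question="What does the task ask participants to do?",
    context=context,
    max_seq_len=512,
    doc_stride=384,
)
print(result["answer"])
```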
|
AyushPJ/test-squad-trained-finetuned-squad | 2574cafa7822779b82dbf71f95bba908d37f754c | 2021-10-18T11:01:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | AyushPJ | null | AyushPJ/test-squad-trained-finetuned-squad | 1 | null | transformers | 27,813 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: test-squad-trained-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-squad-trained-finetuned-squad
This model was trained from scratch on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cu110
- Datasets 1.13.3
- Tokenizers 0.10.3
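The list above maps directly onto the library's TrainingArguments; a sketch of the equivalent configuration (the output_dir name is an assumption, and the optimizer settings noted in the comment are the library defaults):
```
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="test-squad-trained-finetuned-squad",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is already the default optimizer
)
```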
|
Azuris/DialoGPT-medium-envy | 12bfd2c35b53e89f9565c8d91c5198418b226abc | 2021-11-21T11:23:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Azuris | null | Azuris/DialoGPT-medium-envy | 1 | null | transformers | 27,814 | ---
tags:
- conversational
---
# Echidona DialoGPT-Medium Model |
Azuris/DialoGPT-small-envy | 54026d21eaaa83bd2d388d95354bee768ec23544 | 2021-11-20T18:08:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Azuris | null | Azuris/DialoGPT-small-envy | 1 | null | transformers | 27,815 | ---
tags:
- conversational
---
# Echidona DialoGPT Model |
Babysittingyoda/DialoGPT-small-familyguy | d79a724e78bfdf2fbec970b7ea552ce5f2a36d0e | 2022-02-13T05:12:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Babysittingyoda | null | Babysittingyoda/DialoGPT-small-familyguy | 1 | null | transformers | 27,816 | ---
tags:
- conversational
---
# A Peter DialoGPT Model |
BalajiSathesh/DialoGPT-small-harrypotter | 554442283e191980ce599cdfc128c0b2bc7f80bb | 2021-10-20T07:35:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BalajiSathesh | null | BalajiSathesh/DialoGPT-small-harrypotter | 1 | null | transformers | 27,817 | ---
tags:
- conversational
---
Harry Potter DialoGPT Model |
Barleysack/AERoberta | 2261a5fecf528f91a98ae04132b1fbc741c6f64d | 2021-11-03T16:40:46.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Barleysack | null | Barleysack/AERoberta | 1 | null | transformers | 27,818 | Entry not found |
Barleysack/AERoberta2 | 7ac95ae8432adcde6d0e5aeec47f7b45c5ec8dc1 | 2021-11-03T16:42:48.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Barleysack | null | Barleysack/AERoberta2 | 1 | null | transformers | 27,819 | Entry not found |
Barleysack/klue-roberta-LSTM | 2c481845e99aa1f31ee160509319006379ecc494 | 2021-11-03T08:05:33.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | Barleysack | null | Barleysack/klue-roberta-LSTM | 1 | null | transformers | 27,820 | Entry not found |
Bharathdamu/wav2vec2-large-xls-r-300m-hindi | 682e904c89fad4074e33b8bb8281c8ee491b7b15 | 2021-11-29T09:04:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Bharathdamu | null | Bharathdamu/wav2vec2-large-xls-r-300m-hindi | 1 | null | transformers | 27,821 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
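A minimal transcription sketch (the audio path is a placeholder; 16 kHz mono input is assumed, as for all wav2vec2 models):
```
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Bharathdamu/wav2vec2-large-xls-r-300m-hindi")

# "sample.wav" is a placeholder; audio should be 16 kHz mono
print(asr("sample.wav")["text"])
```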
|
BigSalmon/BertaMyWorda | 29171ce0901d4d9fbba878a06e2e3da97da6d00d | 2021-10-06T05:17:35.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/BertaMyWorda | 1 | null | transformers | 27,822 | ```
!pip install transformers
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("BigSalmon/BertaMyWorda")
``` |
BigSalmon/FormalBerta | 209d81671a5102262b1bf932c506baae9b8ea6ed | 2021-10-13T04:35:11.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/FormalBerta | 1 | null | transformers | 27,823 | Entry not found |
BigSalmon/FormalBerta2 | 840f839415bb77c2c99d198c6a6a0bbf9b990d91 | 2021-10-14T03:33:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/FormalBerta2 | 1 | null | transformers | 27,824 | Entry not found |
BigSalmon/FormalRobertaa | 49b51eb8a2d6ece5e0f855d33fecb1f50f403d88 | 2021-12-02T00:19:24.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/FormalRobertaa | 1 | null | transformers | 27,825 | https://huggingface.co/spaces/BigSalmon/MASK2 |
BigSalmon/FormalRobertaaa | 410e4efc932d2ac7ecedd8410356eb82c4f82d1f | 2021-12-02T00:23:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/FormalRobertaaa | 1 | null | transformers | 27,826 | https://huggingface.co/spaces/BigSalmon/MASK2 |
BigSalmon/GPTHeHe | 38b622ccae85f84d8b4d9e41e4e7e02cfa80b139 | 2021-09-29T22:31:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPTHeHe | 1 | null | transformers | 27,827 | Entry not found |
BigSalmon/GoodMaskResults | 45e36ff82b5a46d9830909e8a7ee8081d72d3ff7 | 2021-09-23T04:25:39.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/GoodMaskResults | 1 | null | transformers | 27,828 | Entry not found |
BigSalmon/InformalToFormalLincoln15 | 7823743065db1e27c903c0664fcd307cefbd6f4e | 2021-12-22T22:40:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln15 | 1 | null | transformers | 27,829 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln15")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln15")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
The guys were ( enlisted to spearhead the cause / tasked with marshaling the movement forward / charged with driving the initiative onward / vested with the assignment of forwarding the mission)
informal english: friday should no longer be a workday, but a day added to the weekend, suffusing people with the ability to spend time with their families.
Translated into the Style of Abraham Lincoln: the weekend should come to include friday, ( broadening the window of time for one to be in the company of their family / ( multiplying / swelling / turbocharging / maximizing ) the interval for one to ( reconnect with / feel the warmth of ) their loved ones ).
informal english:
```
|
BigSalmon/InformalToFormalLincoln16 | a71662041fa245860532fa0511b54f021d934415 | 2021-12-23T18:48:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln16 | 1 | null | transformers | 27,830 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln16")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln16")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
``` |
BigSalmon/InformalToFormalLincoln17 | 7414f11920a35050497cda35f3c2503719719c00 | 2021-12-29T21:25:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln17 | 1 | null | transformers | 27,831 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln17")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln17")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
``` |
BigSalmon/InformalToFormalLincoln18 | 85b529cbba1d5ef6d197dcdc7432537edc879b53 | 2022-01-06T22:00:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln18 | 1 | null | transformers | 27,832 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln18")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln18")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
``` |
BigSalmon/InformalToFormalLincoln20 | 901c560fb0c4d49976b5f504ae1d85514a971d97 | 2022-02-04T20:56:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln20 | 1 | null | transformers | 27,833 | Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln20")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln20")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
```
***
wordy: chancing upon a linux user is a rare occurrence in the present day.
Translate into Concise Text: present-day linux users are rare.
***
wordy: an interest in classical music is becoming more and more less popular.
Translate into Concise Text: classical music appreciation is dwindling.
Translate into Concise Text: waning interest in classic music persists.
Translate into Concise Text: interest in classic music is fading.
***
wordy: the ice cream was only one dollar, but it was not a good value for the size.
Translate into Concise Text: the one dollar ice cream was overpriced for its size.
Translate into Concise Text: overpriced, the one dollar ice cream was small.
***
wordy:
``` |
BigSalmon/InformalToFormalLincoln21 | 367bdaf1312029741ca709a73348858fac8a4976 | 2022-02-11T21:24:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln21 | 1 | null | transformers | 27,834 | Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln21")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
```
***
wordy: chancing upon a linux user is a rare occurrence in the present day.
Translate into Concise Text: present-day linux users are rare.
***
wordy: an interest in classical music is becoming more and more less popular.
Translate into Concise Text: classical music appreciation is dwindling.
Translate into Concise Text: waning interest in classic music persists.
Translate into Concise Text: interest in classic music is fading.
***
wordy: the ice cream was only one dollar, but it was not a good value for the size.
Translate into Concise Text: the one dollar ice cream was overpriced for its size.
Translate into Concise Text: overpriced, the one dollar ice cream was small.
***
wordy:
``` |
BigSalmon/Lincoln4 | ef712c89ba15c78db701c572d6711f638801973d | 2021-11-19T22:28:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/Lincoln4 | 1 | null | transformers | 27,835 | Entry not found |
BigSalmon/MrLincoln | 18a0bfa2d79602f1036d1af8d6f890152f4a312c | 2021-11-15T00:35:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln | 1 | null | transformers | 27,836 | Entry not found |
BigSalmon/MrLincoln10 | 2af0c37eb9b2ef91ba12b7b473d6755482ddcdec | 2021-11-29T22:23:11.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln10 | 1 | null | transformers | 27,837 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln10")
```
```
How To Make Prompt:
Original: freedom of the press is a check against political corruption.
Edited: fundamental to the spirit of democracy, freedom of the press is a check against political corruption.
Edited 2: ever at odds with tyranny, freedom of the press is a check against political corruption.
Edited 3: never to be neglected, freedom of the press is a check against political corruption.
Original: solar is a beacon of achievement.
Edited: central to decoupling from the perils of unsustainable energy, solar is a beacon of achievement.
Edited 2: key to a future beyond fossil fuels, solar is a beacon of achievement.
Original: milan is nevertheless ambivalent towards his costly terms.
Edited: keen on contracting him, milan is nevertheless ambivalent towards his costly terms.
Edited 2: intent on securing his services, milan is nevertheless ambivalent towards his costly terms.
Original:
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
``` |
BigSalmon/MrLincoln11 | 9905098be70f2d119b0274d8cd20a4739e5daf56 | 2021-12-01T20:17:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln11 | 1 | null | transformers | 27,838 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln11")
```
```
How To Make Prompt:
Original: freedom of the press is a check against political corruption.
Edited: fundamental to the spirit of democracy, freedom of the press is a check against political corruption.
Edited 2: ever at odds with tyranny, freedom of the press is a check against political corruption.
Edited 3: never to be neglected, freedom of the press is a check against political corruption.
Original: solar is a beacon of achievement.
Edited: central to decoupling from the perils of unsustainable energy, solar is a beacon of achievement.
Edited 2: key to a future beyond fossil fuels, solar is a beacon of achievement.
Original: milan is nevertheless ambivalent towards his costly terms.
Edited: keen on contracting him, milan is nevertheless ambivalent towards his costly terms.
Edited 2: intent on securing his services, milan is nevertheless ambivalent towards his costly terms.
Original:
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
``` |
BigSalmon/MrLincoln12 | 05234b93006a43bf411e1ba5fe617b62bdd3e2ac | 2021-12-04T21:32:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln12 | 1 | null | transformers | 27,839 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln12")
```
```
https://huggingface.co/spaces/BigSalmon/InformalToFormal
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
``` |
BigSalmon/MrLincoln13 | 7f914504e3d80459a59b13a6496e4fe4e8bb6307 | 2021-12-14T01:32:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln13 | 1 | null | transformers | 27,840 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln13")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
``` |
BigSalmon/MrLincoln2 | f730a89e12694fa96139429ffebffc2c60f24761 | 2021-11-16T20:54:17.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln2 | 1 | null | transformers | 27,841 | Entry not found |
BigSalmon/MrLincoln4 | 6bc61c12fce954306a04afcbad05030a6863fb45 | 2021-11-23T23:00:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln4 | 1 | null | transformers | 27,842 | Entry not found |
BigSalmon/MrLincoln5 | f279e5096489bc27ec1acd39176264709771209c | 2021-12-22T22:41:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln5 | 1 | null | transformers | 27,843 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln5")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
``` |
BigSalmon/MrLincoln6 | 97cd1a65364012ea84fcd96938f7609456564cd7 | 2021-11-29T14:42:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln6 | 1 | null | transformers | 27,844 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln6")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
``` |
BigSalmon/MrLincoln8 | a3e2f27cc4fcd9f31667958450a726885ec39ddc | 2021-11-29T14:55:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln8 | 1 | null | transformers | 27,845 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln8")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
``` |
BigSalmon/Points | 38d22fdde41077877ca2f761fc90a5840c96fa1f | 2022-02-07T00:27:49.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/Points | 1 | null | transformers | 27,846 | Converting Points to Paragraphs
Example Prompts:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future.
###
-
``` |
BigSalmon/Robertsy | 5bd1c196a3283279a9f03212afc7d50ddbbf13d8 | 2021-06-10T23:23:33.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/Robertsy | 1 | null | transformers | 27,847 | Entry not found |
BigSalmon/T52 | 7feb0d061c77f4a86f7dfa6a06097d73bfd9af6d | 2021-11-18T02:25:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BigSalmon | null | BigSalmon/T52 | 1 | null | transformers | 27,848 | Entry not found |
BigSalmon/T5F | 0b761e842a302001d2fbf3bb2072bf75d4588290 | 2021-11-17T22:52:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BigSalmon | null | BigSalmon/T5F | 1 | null | transformers | 27,849 | Entry not found |
BigSalmon/T5Salmon | 6734769687dca59d02de46ab195fe1c85b4a15d2 | 2021-06-23T02:19:07.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BigSalmon | null | BigSalmon/T5Salmon | 1 | null | transformers | 27,850 | Entry not found |
BigSalmon/prepositions | 5ae36ad59ba307d0d1c6beeb5d3e20aea64eaeb0 | 2021-09-15T00:46:16.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/prepositions | 1 | null | transformers | 27,851 | Entry not found |
BinksSachary/ShaxxBot | 4ff0ad95ff0c6ccafb67a91a3d489efd12ee211f | 2021-06-03T04:51:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BinksSachary | null | BinksSachary/ShaxxBot | 1 | null | transformers | 27,852 | ---
tags:
- conversational
---
# My Awesome Model |
Blackmist786/DialoGPt-small-transformers4 | 5c2241b486faa415314bef992e993b09437e600a | 2021-08-30T13:57:38.000Z | [
"pytorch"
] | null | false | Blackmist786 | null | Blackmist786/DialoGPt-small-transformers4 | 1 | null | null | 27,853 | Entry not found |
BlueGamerBeast/DialoGPT-small-Morgana | 8060de6f4b30c54cb2cae93797efc43f85834eae | 2021-08-27T17:03:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BlueGamerBeast | null | BlueGamerBeast/DialoGPT-small-Morgana | 1 | null | transformers | 27,854 | ---
tags:
- conversational
---
# Morgana DialoGPT Model |
BogdanKuloren/distilbert-base-uncased-finetuned-ner | d4cbba0678a6e4946c83e37b3de862699498617c | 2021-12-01T17:11:31.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | BogdanKuloren | null | BogdanKuloren/distilbert-base-uncased-finetuned-ner | 1 | null | transformers | 27,855 | Entry not found |
Broadus20/DialoGPT-small-harrypotter | 71ed21e5f3351d5f333cf2fcad8add26aeacfd08 | 2021-10-26T20:30:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Broadus20 | null | Broadus20/DialoGPT-small-harrypotter | 1 | null | transformers | 27,856 | Entry not found |
Broadus20/DialoGPT-small-joshua | 435d8f86a640f6b643fba999c2ed4d083ac02f88 | 2021-10-26T20:26:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Broadus20 | null | Broadus20/DialoGPT-small-joshua | 1 | null | transformers | 27,857 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Brykee/DialoGPT-medium-Morty | f145517c34e47704a482c8bd300428d734b506e7 | 2021-10-20T19:24:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Brykee | null | Brykee/DialoGPT-medium-Morty | 1 | null | transformers | 27,858 | ---
tags:
- conversational
---
# Morty DialoGPT Model |
Bubb-les/DisloGPT-medium-HarryPotter | f54793e6a6668de215b9369392cd21df925843a8 | 2021-09-03T05:23:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Bubb-les | null | Bubb-les/DisloGPT-medium-HarryPotter | 1 | null | transformers | 27,859 | ---
tags:
- conversational
---
# Harry Potter speech |
CLAck/en-km | 6cd52736ad23ade8fa532d515fc5ad2ca4548f5d | 2022-02-15T11:26:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
] | translation | false | CLAck | null | CLAck/en-km | 1 | null | transformers | 27,860 | ---
tags:
- translation
---
This model translates from English to Khmer.
It is the purely fine-tuned version of the MarianMT en-zh model.
This is the result after 30 epochs of pure fine-tuning on Khmer.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained English-Khmer model available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/en-km")
tokenizer = AutoTokenizer.from_pretrained("CLAck/en-km")
# Download a tokenizer that can tokenize English, since the fine-tuned model's tokenizer no longer handles it.
# We use the tokenizer from the initial en-zh model to tokenize the input sentence.
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2khm>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2khm> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
``` |
Canadiancaleb/DialoGPT-small-jesse | 94e601e5b9a95ecaa41f5b95913dd845842ca636 | 2021-09-19T00:05:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Canadiancaleb | null | Canadiancaleb/DialoGPT-small-jesse | 1 | null | transformers | 27,861 | ---
tags:
- conversational
---
# Jesse (Breaking Bad) DialoGPT Model
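A minimal chat-loop sketch, following the standard DialoGPT usage pattern (the turn count and generation settings here are illustrative):
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Canadiancaleb/DialoGPT-small-jesse")
model = AutoModelForCausalLM.from_pretrained("Canadiancaleb/DialoGPT-small-jesse")

chat_history_ids = None
for _ in range(3):  # three turns, as an example
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # Append the new user turn to the running conversation history
    bot_input_ids = (torch.cat([chat_history_ids, new_ids], dim=-1)
                     if chat_history_ids is not None else new_ids)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens
    print("Jesse:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```
|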
CasualHomie/DialoGPT-small-harrypotter | 3e35e923bc5bc11cced9e44895e02588a1705d66 | 2022-01-10T09:30:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CasualHomie | null | CasualHomie/DialoGPT-small-harrypotter | 1 | null | transformers | 27,862 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
CenIA/albert-base-spanish-finetuned-ner | 733d7b561db7fd3fe3a9f52c94363d480716154f | 2021-12-28T20:57:49.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-base-spanish-finetuned-ner | 1 | null | transformers | 27,863 | Entry not found |
CenIA/albert-base-spanish-finetuned-pos | ceeb921fcd3d2c0c04f52af0e40b988014f06e37 | 2021-12-17T18:07:52.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-base-spanish-finetuned-pos | 1 | null | transformers | 27,864 | Entry not found |
CenIA/albert-large-spanish-finetuned-ner | 0733d4df05327822bf691cb0e61546b59fd0937d | 2021-12-29T16:36:20.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-large-spanish-finetuned-ner | 1 | null | transformers | 27,865 | Entry not found |
CenIA/albert-large-spanish-finetuned-qa-mlqa | 6c8d55fe60dcb6609000e0957ad5ae4fb2ea2fc2 | 2022-01-17T15:34:13.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-large-spanish-finetuned-qa-mlqa | 1 | null | transformers | 27,866 | Entry not found |
CenIA/albert-tiny-spanish-finetuned-ner | 9a0d740697992d12cadda5b78db5d5316daa91fa | 2022-01-03T14:55:55.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-ner | 1 | null | transformers | 27,867 | Entry not found |
CenIA/albert-tiny-spanish-finetuned-qa-mlqa | 71c1ef5ddc50551088d0e2cd7392bff8706d681c | 2022-01-17T03:26:58.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-qa-mlqa | 1 | null | transformers | 27,868 | Entry not found |
CenIA/albert-xlarge-spanish-finetuned-ner | d26b1aee64353894ec742e05e45d47ff92de34a3 | 2021-12-29T18:24:18.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-xlarge-spanish-finetuned-ner | 1 | null | transformers | 27,869 | Entry not found |
CenIA/albert-xlarge-spanish-finetuned-pos | 118561072686f9d1777a1257c2a5739d19cfd0f9 | 2021-12-17T22:14:47.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-xlarge-spanish-finetuned-pos | 1 | null | transformers | 27,870 | Entry not found |
CenIA/albert-xxlarge-spanish-finetuned-ner | dec6b413a6a99a54a0be85f451aad9dd72c26851 | 2022-01-12T20:24:58.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-ner | 1 | null | transformers | 27,871 | Entry not found |
CenIA/albert-xxlarge-spanish-finetuned-pos | bfd5f4cf0b0e3e047c3224c3fc5282d20b603d90 | 2021-12-17T22:47:32.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-pos | 1 | null | transformers | 27,872 | Entry not found |
CenIA/bert-base-spanish-wwm-cased-finetuned-qa-mlqa | 6072f94a08a41de665a1ca3ef01ab00f5b4b10cd | 2022-01-22T00:33:16.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/bert-base-spanish-wwm-cased-finetuned-qa-mlqa | 1 | null | transformers | 27,873 | Entry not found |
CennetOguz/distilbert-base-uncased-finetuned-recipe-1 | 3a3117507fbfeee69b72897af50d7b24b0adba1d | 2022-02-21T22:10:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | CennetOguz | null | CennetOguz/distilbert-base-uncased-finetuned-recipe-1 | 1 | null | transformers | 27,874 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-recipe-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recipe-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.2689 |
| No log | 2.0 | 6 | 3.0913 |
| No log | 3.0 | 9 | 3.0641 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Chaewon/mnmt_decoder_en | 72887178479f9ebf36f393d63e51691f806d8ee8 | 2021-12-13T03:22:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Chaewon | null | Chaewon/mnmt_decoder_en | 1 | null | transformers | 27,875 | Entry not found |
Chakita/Friends | 56eb4c44ee3f5a478038776eb8df6c907e35007c | 2021-06-04T10:36:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Chakita | null | Chakita/Friends | 1 | null | transformers | 27,876 | ---
tags:
- conversational
---
# Model trained on F.R.I.E.N.D.S dialogue |
Chakita/KNUBert | 680cb49497b527852a1629ccb2743ad008af1271 | 2022-02-04T10:19:51.000Z | [
"pytorch",
"roberta",
"fill-mask",
"kn",
"dataset:custom data set of Kannada news",
"transformers",
"Masked Language model",
"Autocomplete",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | Chakita | null | Chakita/KNUBert | 1 | null | transformers | 27,877 | Kannada BERT model finetuned on a news corpus
---
language:
- kn
thumbnail:
tags:
- Masked Language model
- Autocomplete
license: mit
datasets:
- custom data set of Kannada news
--- |
ChaseBread/DialoGPT-small-harrypotter | 80feb14b574e14ed722263659a27a82b24274a3a | 2021-11-20T02:11:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ChaseBread | null | ChaseBread/DialoGPT-small-harrypotter | 1 | null | transformers | 27,878 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Chun/DialoGPT-medium-dailydialog | 6ea990d592dac75138fa688e6f9687bb0bf760d7 | 2021-08-06T14:11:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Chun | null | Chun/DialoGPT-medium-dailydialog | 1 | null | transformers | 27,879 | Entry not found |
Chun/w-zh2en-hsk | 91337154cf64e358df78b5cd3beefbb81f7aa272 | 2021-08-25T13:16:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Chun | null | Chun/w-zh2en-hsk | 1 | null | transformers | 27,880 | Entry not found |
Chun/w-zh2en-mto | 3c7031a13908fbe7924cc2a738872abbfbb07086 | 2021-08-25T10:15:30.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Chun | null | Chun/w-zh2en-mto | 1 | null | transformers | 27,881 | Entry not found |
CodeDanCode/CartmenBot | 92fadcf2694e035959a78a8c1f3ef95223cccc17 | 2021-10-26T12:37:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CodeDanCode | null | CodeDanCode/CartmenBot | 1 | null | transformers | 27,882 | ---
tags:
- conversational
---
# Cartman DialoGPT Model |
CoderBoy432/DialoGPT-small-harrypotter | 62c7598f45cf74ea7d0f7f865054550072ec1344 | 2022-01-27T11:11:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CoderBoy432 | null | CoderBoy432/DialoGPT-small-harrypotter | 1 | null | transformers | 27,883 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
CoderEFE/DialoGPT-medium-marx | 895b4987feee4333c9ab2d85795d0d059287e954 | 2021-06-05T07:08:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | CoderEFE | null | CoderEFE/DialoGPT-medium-marx | 1 | null | transformers | 27,884 | |
CoffeeAddict93/gpt2-medium-call-of-the-wild | 63d04437e70b710a70cc27ef5cd20a55d2e30b22 | 2021-12-02T02:09:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | CoffeeAddict93 | null | CoffeeAddict93/gpt2-medium-call-of-the-wild | 1 | null | transformers | 27,885 | Entry not found |
Coldestadam/Breakout_Mentors_SpongeBob_Model | 7d6ffb51cc99b247810054cf633e5baed74778a2 | 2021-07-13T05:27:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Coldestadam | null | Coldestadam/Breakout_Mentors_SpongeBob_Model | 1 | null | transformers | 27,886 | Entry not found |
Connorvr/TeachingGen | c34a9c13913993a68d14126db7630e4e6a6123f8 | 2022-03-17T00:14:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Connorvr | null | Connorvr/TeachingGen | 1 | null | transformers | 27,887 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
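As a minimal usage sketch, the model should work with the standard `transformers` text-generation pipeline; the prompt and sampling settings below are illustrative assumptions:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Connorvr/TeachingGen")

# Sample a short continuation from an illustrative prompt.
result = generator("Today's lesson covers", max_new_tokens=40, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```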
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Corvus/DialoGPT-medium-CaptainPrice-Extended | 3a4557796e324d3e1c2ff290cb37d04376619de4 | 2021-09-20T18:08:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Corvus | null | Corvus/DialoGPT-medium-CaptainPrice-Extended | 1 | null | transformers | 27,888 | ---
tags:
- conversational
---
# DialoGPT Captain Price (Extended) |
Coyotl/DialoGPT-test2-arthurmorgan | d31c6ca6b65d8d8aa1b12d2f09a69493d4a51e7e | 2021-08-27T23:00:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Coyotl | null | Coyotl/DialoGPT-test2-arthurmorgan | 1 | null | transformers | 27,889 | ---
tags:
- conversational
---
# Arthur Morgan DialoGPT Model |
Culmenus/IceBERT-finetuned-ner | 70239ed548efc4609997a01a3b91ab39450ac51b | 2021-10-01T15:49:45.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Culmenus | null | Culmenus/IceBERT-finetuned-ner | 1 | null | transformers | 27,890 | ---
license: gpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IceBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8927335640138409
- name: Recall
type: recall
value: 0.8631855657784682
- name: F1
type: f1
value: 0.8777109531620194
- name: Accuracy
type: accuracy
value: 0.9849836396073506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0807
- Precision: 0.8927
- Recall: 0.8632
- F1: 0.8777
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
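As a minimal usage sketch, the model should work with the standard token-classification pipeline; the Icelandic example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Culmenus/IceBERT-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jón Sigurðsson fæddist á Hrafnseyri."))
```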
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0544 | 1.0 | 2904 | 0.0774 | 0.8859 | 0.8490 | 0.8670 | 0.9833 |
| 0.0284 | 2.0 | 5808 | 0.0781 | 0.8709 | 0.8590 | 0.8649 | 0.9840 |
| 0.0166 | 3.0 | 8712 | 0.0807 | 0.8927 | 0.8632 | 0.8777 | 0.9850 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Culmenus/XLMR-ENIS-finetuned-ner | 915ac69556f4a5f299c97fe8868884ec4b5ed0c0 | 2021-10-01T17:23:19.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Culmenus | null | Culmenus/XLMR-ENIS-finetuned-ner | 1 | null | transformers | 27,891 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8803619696791632
- name: Recall
type: recall
value: 0.8517339397384878
- name: F1
type: f1
value: 0.8658113730929264
- name: Accuracy
type: accuracy
value: 0.9837103244207861
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0891
- Precision: 0.8804
- Recall: 0.8517
- F1: 0.8658
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
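These settings map directly onto the standard `TrainingArguments` API; a minimal reconstruction follows, in which the output directory and any argument not listed above are assumptions (the Adam betas and epsilon above are the library defaults, so they need not be passed explicitly):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="XLMR-ENIS-finetuned-ner",  # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```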
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0573 | 1.0 | 2904 | 0.1024 | 0.8608 | 0.8003 | 0.8295 | 0.9799 |
| 0.0307 | 2.0 | 5808 | 0.0899 | 0.8707 | 0.8380 | 0.8540 | 0.9825 |
| 0.0198 | 3.0 | 8712 | 0.0891 | 0.8804 | 0.8517 | 0.8658 | 0.9837 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
DARKVIP3R/DialoGPT-medium-Anakin | ffdb1c1b3d7b61a95d6aa2db1101ed0d53513dd9 | 2022-01-27T01:59:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | DARKVIP3R | null | DARKVIP3R/DialoGPT-medium-Anakin | 1 | null | transformers | 27,892 | ---
tags:
- conversational
---
# Anakin Skywalker DialoGPT Model |
DarkWolf/kn-electra-small | 788af312b7f1e5e3991cb27f54941b2a6b70ed41 | 2021-07-02T15:59:13.000Z | [
"pytorch",
"electra",
"feature-extraction",
"transformers"
] | feature-extraction | false | DarkWolf | null | DarkWolf/kn-electra-small | 1 | null | transformers | 27,893 | Entry not found |
Davlan/byt5-base-eng-yor-mt | 3bde708dd00ec8d66b66ecc5a4ab5aca6ade3417 | 2021-08-08T21:58:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Davlan | null | Davlan/byt5-base-eng-yor-mt | 1 | 1 | transformers | 27,894 | ---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# byt5-base-eng-yor-mt
## Model description
**byt5-base-eng-yor-mt** is a **machine translation** model from English to Yorùbá based on a fine-tuned byt5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *byt5-base* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
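#### How to use

As a minimal sketch, the model should load with the standard `transformers` seq2seq API; the card does not state whether a task prefix is expected, so plain English input is assumed:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Davlan/byt5-base-eng-yor-mt")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/byt5-base-eng-yor-mt")

# Translate an English sentence to Yorùbá.
inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```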
#### Limitations and bias
This model is limited by its training dataset and may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on an NVIDIA V100 GPU.
## Eval results on Test set (BLEU score)
Fine-tuning byt5-base achieves **12.23 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82 BLEU.
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-luganda | ac8733bcd2a24ad1b1065296dc5768e5c5d07d6e | 2021-06-17T17:25:57.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"lg",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-luganda | 1 | 1 | transformers | 27,895 | ---
language: lg
datasets:
---
# xlm-roberta-base-finetuned-luganda
## Model description
**xlm-roberta-base-finetuned-luganda** is a **Luganda RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Luganda texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Luganda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-luganda')
>>> unmasker("Ffe tulwanyisa abo abaagala okutabangula <mask>, Kimuli bwe yategeezezza.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BUKKEDDE](https://github.com/masakhane-io/masakhane-ner/tree/main/text_by_language/luganda) + [Luganda CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset | XLM-R F1 | lg_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 79.69 | 84.70
### BibTeX entry and citation info
By David Adelani
```
```
|
DeadBeast/roberta-base-pretrained-mr-2 | 465c6f261bcf84ff4366cd43003c380646882cd1 | 2021-08-29T19:23:16.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DeadBeast | null | DeadBeast/roberta-base-pretrained-mr-2 | 1 | null | transformers | 27,896 | Entry not found |
Declan/Breitbart_model_v1 | 48c8bfc01b2c789bd559c70e2506c88fc8908c3e | 2021-12-11T23:39:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v1 | 1 | null | transformers | 27,897 | Entry not found |
Declan/Breitbart_model_v3 | 44f15231e118a135e3b71a354465e0cdd16c31ef | 2021-12-15T05:45:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v3 | 1 | null | transformers | 27,898 | Entry not found |
Declan/Breitbart_model_v4 | adfee34c5b0597e0c4d270c3f8acb5a04e231794 | 2021-12-15T06:14:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v4 | 1 | null | transformers | 27,899 | Entry not found |