modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
kompactss/JeBERT_ko_je | 5e46846eb3efd409d9850a6c22f9ab9447ffb6ae | 2022-05-16T06:11:24.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | kompactss | null | kompactss/JeBERT_ko_je | 0 | null | transformers | 37,200 | ---
license: afl-3.0
---
# Jeju Dialect Translation Model
- Standard Korean -> Jeju dialect
- Made by Team 3 of the Goorm NLP course, 3rd cohort!!
- GitHub link: https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
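The tags mark this checkpoint as a `transformers` encoder-decoder model, so it should load through the generic `EncoderDecoderModel` API. The snippet below is a minimal sketch under that assumption; the input sentence and generation call are illustrative, not from the original card:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("kompactss/JeBERT_ko_je")
model = EncoderDecoderModel.from_pretrained("kompactss/JeBERT_ko_je")

# Translate standard Korean to Jeju dialect (assumes the saved config
# carries the usual decoder_start_token_id for generation)
inputs = tokenizer("안녕하세요", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```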
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)
## 3. Hyper Parameters
- Epochs : 10 (best at epoch 7)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) Dataset : 67.3
---
### CREDIT
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected] |
jcai1/distilbert-base-uncased-finetuned-imdb-accelerate | a0d920ff49d404d0e92c68938b0ef697084de1d8 | 2022-05-01T15:26:09.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jcai1 | null | jcai1/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | 37,201 | Entry not found |
hassnain/wav2vec2-base-timit-demo-colab57 | a2f19cd694a4f732b48f0432570ec4bf93024243 | 2022-05-01T18:17:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab57 | 0 | null | transformers | 37,202 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab57
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7328
- Wer: 0.4593
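For reference, WER is the word error rate: (substitutions + insertions + deletions) divided by the number of reference words. A quick sketch of computing it with the `jiwer` library (not part of the original training script; the strings are illustrative):

```python
import jiwer  # pip install jiwer

reference = "the quick brown fox"
hypothesis = "the quick brown box"
print(jiwer.wer(reference, hypothesis))  # 0.25, i.e. one substituted word out of four
```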
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
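These settings map almost one-to-one onto `transformers.TrainingArguments`; below is a minimal sketch of the equivalent configuration (the output directory name is illustrative, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab57",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=60,
    fp16=True,  # "Native AMP" mixed-precision training
)
```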
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9876 | 7.04 | 500 | 3.1483 | 1.0 |
| 1.4621 | 14.08 | 1000 | 0.6960 | 0.6037 |
| 0.4404 | 21.13 | 1500 | 0.6392 | 0.5630 |
| 0.2499 | 28.17 | 2000 | 0.6738 | 0.5281 |
| 0.1732 | 35.21 | 2500 | 0.6789 | 0.4952 |
| 0.1347 | 42.25 | 3000 | 0.7328 | 0.4835 |
| 0.1044 | 49.3 | 3500 | 0.7258 | 0.4840 |
| 0.0896 | 56.34 | 4000 | 0.7328 | 0.4593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
rjuez00/meddocan-flair-spanish-fast-bilstm-crf | a354058108ca3c5cfce208815bc9e1763a52fe51 | 2022-05-03T14:19:44.000Z | [
"pytorch"
] | null | false | rjuez00 | null | rjuez00/meddocan-flair-spanish-fast-bilstm-crf | 0 | null | null | 37,203 | The [MEDDOCAN dataset](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN) has some entities not separated by a space but a dot. For example such is the case of Alicante.Villajoyosa which are two separate entities but with traditional tokenizers are only one Token. Spacy tokenizers also don't work, when I was trying to assign the entities two the tokens on training SpaCy v3 frecuently reported errors that it could not match some entities to tokens due to this problem.
That is why I have created a tokenizer with manual regex rules, which improves performance when using the model:
```
from flair.models import SequenceTagger
from flair.data import Sentence
from flair.data import Tokenizer
import re

class CustomTokenizer(Tokenizer):
    def tokenize(self, text):
        finaltokens = []
        tokens = text.split()
        for token in tokens:
            # split further on hyphens and slashes
            for i in filter(None, re.split(r"-|/", token)):
                # split tokens like "Alicante.Villajoyosa" on the inner dot
                if len(re.findall(r"(\w)\.(\w)", i)) > 0:
                    for j in filter(None, i.split(".")):
                        finaltokens.append(j)
                else:
                    finaltokens.append(i)
        return finaltokens

flairTagger = SequenceTagger.load("rjuez00/meddocan-flair-spanish-fast-bilstm-crf")
```
To use the model, you just have to instantiate it as above and then create a Flair Sentence with the text and the tokenizer, like this:
```documentFlair = Sentence(text, use_tokenizer = CustomTokenizer())```
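As a quick sanity check (not part of the original snippet), the tokenizer splits the motivating example from above into two tokens:
```
print(CustomTokenizer().tokenize("Alicante.Villajoyosa"))
# ['Alicante', 'Villajoyosa']
```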
Unfortunately, the spans that Flair provides while performing NER on the MEDDOCAN dataset are not correct; I'm not aware whether it's a bug in my version (0.11). But I've developed a system that corrects the slight deviations of the offsets:
```
# "text" is assumed to hold the raw document string
documentFlair = Sentence(text, use_tokenizer = CustomTokenizer())
flairTagger.predict(documentFlair)

predictedEntities = []
for idxentity, entity in enumerate(documentFlair.get_spans("ner")):
    predictedEntities.append(entity)
```
```
# "f" is assumed to be an already-open output file (the original snippet
# does not define it); entities are written in brat-style .ann format
for idxentity, entity in enumerate(reversed(predictedEntities), start = 1):
    entityType = entity.get_label("ner").value
    startEntity = entity.start_position
    endEntity = entity.end_position

    # push the start forward past leading whitespace/punctuation
    while text[startEntity] in [" ", "(", ")", ",", ".", ";", ":", "!", "?", "-", "\n"]:
        startEntity += 1

    # extend the end while it still cuts through an alphanumeric character
    while len(text) > endEntity and (text[endEntity].isalpha() or text[endEntity].isnumeric()):
        endEntity += 1

    # trim trailing punctuation
    while text[endEntity-1] in [" ", ",", ".", ";", ":", "!", "?", "-", ")", "(", "\\", "/", "\"", "'", "+", "*", "&", "%", "$", "#", "@", "~", "`", "^", "|", "=", ":", ";", ">", "<", "]"]:
        endEntity -= 1

    f.write( "T" + str(idxentity) + "\t"
             + entityType + " " + str(startEntity) + " " + str(endEntity) +
             "\t" + text[startEntity:endEntity] + "\n" )
``` |
hassnain/wav2vec2-base-timit-demo-colab240 | 744056244f1d5484922e1193f3a4fb26c25f5dbe | 2022-05-02T12:31:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab240 | 0 | null | transformers | 37,204 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab240
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab240
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6367
- eval_wer: 0.5855
- eval_runtime: 20.4889
- eval_samples_per_second: 6.931
- eval_steps_per_second: 0.879
- epoch: 14.08
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
lilitket/20220501-201151 | 78a8890c1948ba18224ab4ce3bbffdd9e1f0d5bc | 2022-05-01T21:44:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220501-201151 | 0 | null | transformers | 37,205 | Entry not found |
hassnain/wav2vec2-base-timit-demo-colab66 | 5f367e2f9f9fbba0a30b922d0ce77d2d8eb5f93a | 2022-05-02T00:14:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab66 | 0 | null | transformers | 37,206 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab66
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2675
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.3521 | 7.04 | 500 | 3.3666 | 1.0 |
| 3.1768 | 14.08 | 1000 | 3.3977 | 1.0 |
| 3.1576 | 21.13 | 1500 | 3.2332 | 1.0 |
| 3.1509 | 28.17 | 2000 | 3.2686 | 1.0 |
| 3.149 | 35.21 | 2500 | 3.2550 | 1.0 |
| 3.1478 | 42.25 | 3000 | 3.2689 | 1.0 |
| 3.1444 | 49.3 | 3500 | 3.2848 | 1.0 |
| 3.1442 | 56.34 | 4000 | 3.2675 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sherry7144/wav2vec2-base-timit-demo-colab2 | 61976a258bbe025480b547ff422fa0b3b7305594 | 2022-05-01T23:51:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sherry7144 | null | sherry7144/wav2vec2-base-timit-demo-colab2 | 0 | null | transformers | 37,207 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7746
- Wer: 0.5855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1452 | 13.89 | 500 | 2.9679 | 1.0 |
| 1.075 | 27.78 | 1000 | 0.7746 | 0.5855 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
vinaykudari/pega-acled-t2s | 69e1d146e140fc4babffa29c61cde4bc17cdd96f | 2022-05-02T00:47:23.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vinaykudari | null | vinaykudari/pega-acled-t2s | 0 | null | transformers | 37,208 | Entry not found |
inhee/opus-mt-ko-en-finetuned-ko-to-en5 | d86cf350b94af8177a971e5d03dbe53f14343a22 | 2022-05-02T08:38:56.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | inhee | null | inhee/opus-mt-ko-en-finetuned-ko-to-en5 | 0 | null | transformers | 37,209 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en5
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1434
- Bleu: 52.6052
- Gen Len: 8.1982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
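Note that the total train batch size reported above follows directly from gradient accumulation: 4 (per-device batch) × 256 (accumulation steps) = 1024.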
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 105 | 1.8436 | 35.225 | 8.1735 |
| No log | 2.0 | 210 | 1.4106 | 44.7159 | 8.1923 |
| No log | 3.0 | 315 | 1.2410 | 49.5117 | 8.2165 |
| No log | 4.0 | 420 | 1.1661 | 51.8883 | 8.201 |
| 1.8123 | 5.0 | 525 | 1.1434 | 52.6052 | 8.1982 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lilitket/20220502-070735 | de26eab8f530945a79e10fd4ce96a62f782b22f7 | 2022-05-02T10:13:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220502-070735 | 0 | null | transformers | 37,210 | Entry not found |
lilitket/20220502-085955 | 286a496fd5ac96654c219480e2acd9d6b0f63b7f | 2022-05-02T10:33:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220502-085955 | 0 | null | transformers | 37,211 | Entry not found |
nanopass/emotion_test | 93e7cc1a1843ebbbb5877c9760d856bf62157069 | 2022-05-02T10:05:50.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | nanopass | null | nanopass/emotion_test | 0 | null | transformers | 37,212 | Entry not found |
mechanicpanic/bart_github-typo | 8a328e8c89d6aaa73d01b5c45b5c091beaad4c59 | 2022-05-02T11:51:44.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mechanicpanic | null | mechanicpanic/bart_github-typo | 0 | null | transformers | 37,213 | Entry not found |
hassnain/wav2vec2-base-timit-demo-colab971 | b4a560ae60d0736a62c169c2b8fcc417e55244eb | 2022-05-02T14:40:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab971 | 0 | null | transformers | 37,214 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab971
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab971
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6551
- Wer: 0.4448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9461 | 1.77 | 500 | 3.2175 | 1.0 |
| 2.5387 | 3.53 | 1000 | 1.2239 | 0.7851 |
| 0.9632 | 5.3 | 1500 | 0.7275 | 0.6352 |
| 0.6585 | 7.07 | 2000 | 0.6218 | 0.5896 |
| 0.4875 | 8.83 | 2500 | 0.5670 | 0.5651 |
| 0.397 | 10.6 | 3000 | 0.5796 | 0.5487 |
| 0.3298 | 12.37 | 3500 | 0.5870 | 0.5322 |
| 0.2816 | 14.13 | 4000 | 0.5796 | 0.5016 |
| 0.2396 | 15.9 | 4500 | 0.5956 | 0.5040 |
| 0.2019 | 17.67 | 5000 | 0.5911 | 0.4847 |
| 0.1845 | 19.43 | 5500 | 0.6050 | 0.4800 |
| 0.1637 | 21.2 | 6000 | 0.6518 | 0.4927 |
| 0.1428 | 22.97 | 6500 | 0.6247 | 0.4645 |
| 0.1319 | 24.73 | 7000 | 0.6592 | 0.4711 |
| 0.1229 | 26.5 | 7500 | 0.6526 | 0.4556 |
| 0.1111 | 28.27 | 8000 | 0.6551 | 0.4448 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
fahadtouseef/wav2vec2-base-timit-demo-colab_2 | 6d34a47e66955344b4019046bc692ae11533179e | 2022-05-02T14:18:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | fahadtouseef | null | fahadtouseef/wav2vec2-base-timit-demo-colab_2 | 0 | null | transformers | 37,215 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab_2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3801
- Wer: 0.3035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7227 | 3.52 | 500 | 2.6961 | 1.0 |
| 1.1237 | 7.04 | 1000 | 0.6088 | 0.5315 |
| 0.4886 | 10.56 | 1500 | 0.4709 | 0.4353 |
| 0.3148 | 14.08 | 2000 | 0.4341 | 0.3942 |
| 0.2229 | 17.61 | 2500 | 0.4035 | 0.3616 |
| 0.1693 | 21.13 | 3000 | 0.3868 | 0.3289 |
| 0.1393 | 24.65 | 3500 | 0.3993 | 0.3135 |
| 0.118 | 28.17 | 4000 | 0.3801 | 0.3035 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab3000 | d8f73efac93ebe1b0465daea588c3e49faca660b | 2022-05-02T17:34:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab3000 | 0 | null | transformers | 37,216 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3000
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6852
- eval_wer: 0.3845
- eval_runtime: 71.297
- eval_samples_per_second: 9.846
- eval_steps_per_second: 1.234
- epoch: 24.22
- step: 8500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
niprestige/GPT-small-DusabeBot | 39ea423ae0a93028b952aeede07eeb2adec9dc3b | 2022-05-03T07:58:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | niprestige | null | niprestige/GPT-small-DusabeBot | 0 | null | transformers | 37,217 | ---
tags:
- conversational
---
# Umutoni DialoGPT Model |
fahadtouseef/wav2vec2-base-timit-demo-colab_3 | 6e59bc2d15554b10404e8ddc44ce4eb648608071 | 2022-05-02T17:56:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | fahadtouseef | null | fahadtouseef/wav2vec2-base-timit-demo-colab_3 | 0 | null | transformers | 37,218 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab_3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1942
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2975 | 3.52 | 500 | 3.1771 | 1.0 |
| 3.1468 | 7.04 | 1000 | 3.1917 | 1.0 |
| 3.147 | 10.56 | 1500 | 3.1784 | 1.0 |
| 3.1467 | 14.08 | 2000 | 3.1850 | 1.0 |
| 3.1446 | 17.61 | 2500 | 3.2022 | 1.0 |
| 3.1445 | 21.13 | 3000 | 3.2196 | 1.0 |
| 3.1445 | 24.65 | 3500 | 3.2003 | 1.0 |
| 3.1443 | 28.17 | 4000 | 3.1942 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
imumtozee/DA-ctrl-bot | 6f403cb2b2cfe22617739da156f935caa3e5e9d3 | 2022-05-02T16:56:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | imumtozee | null | imumtozee/DA-ctrl-bot | 0 | null | transformers | 37,219 | Entry not found |
huggingtweets/wliiyum | 7c0129b3c33c94abce5fd4c24776193662cea8e1 | 2022-05-02T17:02:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wliiyum | 0 | null | transformers | 37,220 | ---
language: en
thumbnail: http://www.huggingtweets.com/wliiyum/1651510930825/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1508957108892581889/eKjVqH0A_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">will i am</div>
<div style="text-align: center; font-size: 14px;">@wliiyum</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from will i am.
| Data | will i am |
| --- | --- |
| Tweets downloaded | 3095 |
| Retweets | 1040 |
| Short tweets | 582 |
| Tweets kept | 1473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yz2d32iv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wliiyum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2z4gwg3s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2z4gwg3s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wliiyum')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lilitket/20220502-170601 | d2c4aa35171c2c146e3b91d0c55e0148c5c4dbe5 | 2022-05-02T20:14:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220502-170601 | 0 | null | transformers | 37,221 | Entry not found |
roshantushar/wav2vec2-base-timit-demo-colab | 42e0a4629f30ceaf6f3dcbf6a66722f5f7f4b9a8 | 2022-05-07T05:33:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | roshantushar | null | roshantushar/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 37,222 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
kompactss/JeBERT_ko_je_v2 | e3dec1859734b0c8f2dddbfcd9af245dfb7c26bb | 2022-05-16T06:10:50.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | kompactss | null | kompactss/JeBERT_ko_je_v2 | 0 | 0 | transformers | 37,223 | ---
license: afl-3.0
---
# Jeju Dialect Translation Model
- Standard Korean -> Jeju dialect
- Made by Team 3 of the Goorm NLP course, 3rd cohort!!
- GitHub link: https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)_v2
## 3. Hyper Parameters
- Epochs : 10 (best at epoch 7)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) Dataset : 67.6
---
### CREDIT
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected]
- [email protected] |
huggingtweets/hot_domme | 341b19448bc034b3d31249fe5fed0f2891e39653 | 2022-05-09T02:29:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/hot_domme | 0 | null | transformers | 37,224 | ---
language: en
thumbnail: http://www.huggingtweets.com/hot_domme/1652063339945/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445280995175911425/JkWNc3mK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">โขSTREET DON ๐ฅฌโ๐ฆุบุนุชุณ ุฏุชุนุฏ๐ฆโ Steamin Hot</div>
<div style="text-align: center; font-size: 14px;">@hot_domme</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from •STREET DON 🥬 غعتس دتعد Steamin Hot.
| Data | •STREET DON 🥬 غعتس دتعد Steamin Hot |
| --- | --- |
| Tweets downloaded | 2733 |
| Retweets | 324 |
| Short tweets | 371 |
| Tweets kept | 2038 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cv5ajux/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hot_domme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2znfpdzh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2znfpdzh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hot_domme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
maesneako/gpt2-fr_paco-cheese_e1 | 28ab567e08f9c7bc6af7c7ec3c9ee71c87cabecd | 2022-05-02T20:13:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | maesneako | null | maesneako/gpt2-fr_paco-cheese_e1 | 0 | null | transformers | 37,225 | Entry not found |
lilitket/20220502-221203 | 7a0af04ac259d85f05c8c1cada731278f4f92116 | 2022-05-02T23:45:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220502-221203 | 0 | null | transformers | 37,226 | Entry not found |
lilitket/20220502-221640 | 12ebe9a43671b0f4e5e4f567d28842a85b7c7a1c | 2022-05-02T23:51:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220502-221640 | 0 | null | transformers | 37,227 | Entry not found |
huggingtweets/usrsistakenhelp | 12d3d7a2c0471cc83d49b7ffdafa9323505ce7b0 | 2022-05-02T22:26:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/usrsistakenhelp | 0 | null | transformers | 37,228 | ---
language: en
thumbnail: http://www.huggingtweets.com/usrsistakenhelp/1651530363067/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1520487753896665088/lO1PwH2q_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rosa - I miss tgamm</div>
<div style="text-align: center; font-size: 14px;">@usrsistakenhelp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rosa - I miss tgamm.
| Data | Rosa - I miss tgamm |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 507 |
| Short tweets | 1160 |
| Tweets kept | 1577 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jxrwgo01/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usrsistakenhelp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1z4w7mpe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1z4w7mpe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/usrsistakenhelp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lilitket/20220503-001900 | b44e0f3b78d5d4deeeb50e8b2a41e8cfcd12c5f1 | 2022-05-03T02:54:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-001900 | 0 | null | transformers | 37,229 | Entry not found |
huggingtweets/alessandramakes | aeeec3ba668d09b0ca0a9ef56cd0a6741aa54c82 | 2022-05-03T01:10:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/alessandramakes | 0 | null | transformers | 37,230 | ---
language: en
thumbnail: http://www.huggingtweets.com/alessandramakes/1651540241058/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1487593747760103427/KhwkYl5P_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alessandra (Taylorโs Version)</div>
<div style="text-align: center; font-size: 14px;">@alessandramakes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alessandra (Taylor’s Version).
| Data | Alessandra (Taylor’s Version) |
| --- | --- |
| Tweets downloaded | 3156 |
| Retweets | 2020 |
| Short tweets | 279 |
| Tweets kept | 857 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xfxie26/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alessandramakes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pwrv590) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pwrv590/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alessandramakes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/irenegellar | fb7814fe0cb936e754c0f36012905deff314985c | 2022-05-03T05:26:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/irenegellar | 0 | null | transformers | 37,231 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1490143959540133891/C-DLhhNQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Random Small Streamer Chick</div>
<div style="text-align: center; font-size: 14px;">@irenegellar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Random Small Streamer Chick.
| Data | Random Small Streamer Chick |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 331 |
| Short tweets | 472 |
| Tweets kept | 2438 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ns8qkzx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @irenegellar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fvfz3ir) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fvfz3ir/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/irenegellar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tau/false_large_t5_lm_8_1024_0.15_epoch1 | 7f962c74166743f1760a1f88d9209f30f965e78b | 2022-05-03T07:29:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/false_large_t5_lm_8_1024_0.15_epoch1 | 0 | null | transformers | 37,232 | Entry not found |
lilitket/20220503-095223 | 9335f4869351f03fe9c9c37ae73469201118fed4 | 2022-05-03T10:55:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-095223 | 0 | null | transformers | 37,233 | Entry not found |
lilitket/20220503-095247 | 317928821e205404a60a93b63c75d95d55e2b5d7 | 2022-05-03T10:55:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-095247 | 0 | null | transformers | 37,234 | Entry not found |
farjvr/DialoGPT-small-Mortyfar | db6525d3163ef344aeaa0f9107c88828b60802e9 | 2022-05-10T05:45:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | farjvr | null | farjvr/DialoGPT-small-Mortyfar | 0 | null | transformers | 37,235 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
lilitket/20220503-123019 | 5ec4f685b71cafe9b28d5b5b308b550c25bb18d6 | 2022-05-03T14:04:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-123019 | 0 | null | transformers | 37,236 | Entry not found |
osanseviero/test_metrics | 7a90079a0b2bc0f8f951cda3f4e0fb55f396f1fd | 2022-05-03T13:19:53.000Z | [
"image-classification",
"pytorch",
"model-index"
] | image-classification | false | osanseviero | null | osanseviero/test_metrics | 0 | null | null | 37,237 | ---
tags:
- image-classification
- pytorch
metrics:
- accuracy
model-index:
- name: llama-horse-zebra
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
dataset:
name: HumanEval
type: openai_humaneval
- dataset:
name: HumanEval
type: openai_humaneval
metrics:
- name: pass@1
type: code_eval
value: 4
task:
name: Code Generation
type: code-generation
- dataset:
name: HumanEval
type: openai_differenty_type
metrics:
- name: pass@1
type: code_eval
value: 4
task:
name: Code Generation
type: code-generation
---
test |
masakhane/afrimbart_en_hau_news | 8bc2bf5beed59620a6978404bd3efa54a7cfc4d9 | 2022-05-03T13:06:54.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimbart_en_hau_news | 0 | null | transformers | 37,238 | ---
license: afl-3.0
---
|
masakhane/afrimbart_hau_en_news | d2cff04df6dd90553321346b4198f9accb8c7d1e | 2022-05-03T13:07:06.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimbart_hau_en_news | 0 | null | transformers | 37,239 | ---
license: afl-3.0
---
|
masakhane/afrimt5_hau_en_news | df8a4eba89da253fbd30f733c2171c6c87acc941 | 2022-05-03T13:07:03.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_hau_en_news | 0 | null | transformers | 37,240 | ---
license: afl-3.0
---
|
masakhane/afrimt5_en_hau_news | a2f28aac3a178a78c9e8139ed75e88424bc59f8c | 2022-05-03T13:07:00.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_en_hau_news | 0 | null | transformers | 37,241 | ---
license: afl-3.0
---
|
masakhane/afribyt5_en_hau_news | 049d7cfa88558ad2782e76603674f6e4d1bc40b4 | 2022-05-03T13:15:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afribyt5_en_hau_news | 0 | null | transformers | 37,242 | ---
license: afl-3.0
---
|
masakhane/afribyt5_hau_en_news | 70ebcf66fad8cad38985fb0141f30dbaf4d9cc61 | 2022-05-03T13:15:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afribyt5_hau_en_news | 0 | null | transformers | 37,243 | ---
license: afl-3.0
---
|
masakhane/byt5_hau_en_news | 62485765293258656b5c554d5f16793a25959345 | 2022-05-03T13:15:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/byt5_hau_en_news | 0 | null | transformers | 37,244 | ---
license: afl-3.0
---
|
masakhane/byt5_en_hau_news | b60d53d1a71d17ddc0e37bb786829373010480c1 | 2022-05-03T13:15:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/byt5_en_hau_news | 0 | null | transformers | 37,245 | ---
license: afl-3.0
---
|
masakhane/mt5_en_hau_news | 584bfeef283c4d7f0c5a4201099d0b41e6bdc956 | 2022-05-03T13:24:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mt5_en_hau_news | 0 | null | transformers | 37,246 | ---
license: afl-3.0
---
|
masakhane/mt5_hau_en_news | 86df7e8beeed87d00835c3862f843d883160bda8 | 2022-05-03T13:24:34.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mt5_hau_en_news | 0 | null | transformers | 37,247 | ---
license: afl-3.0
---
|
masakhane/mbart50_hau_en_news | 63ca5562427e837c8e951dac90ee8905a34f1fdd | 2022-05-03T13:24:31.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_hau_en_news | 0 | null | transformers | 37,248 | ---
license: afl-3.0
---
|
masakhane/mbart50_en_hau_news | 0ec0b84972ab53e220b659504fab988b0c154065 | 2022-05-03T13:24:36.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_en_hau_news | 0 | null | transformers | 37,249 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_hau_en_rel_news | 932b996b7da713ad3a2c43d38dd1ff6e9ae2caae | 2022-05-03T13:37:04.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_hau_en_rel_news | 0 | null | transformers | 37,250 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_hau_rel_news_ft | 78bef04acda9ac4fcf5efdbb50261bc5386529ac | 2022-05-03T13:55:11.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_hau_rel_news_ft | 0 | null | transformers | 37,251 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_hau_en_rel_news_ft | 659f1d44692ee927e015278760196ec01902c079 | 2022-05-03T13:55:20.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_hau_en_rel_news_ft | 0 | null | transformers | 37,252 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_hau_rel_ft | 81e0aaca9ad15428d4ed152512a5732526d74aaa | 2022-05-03T13:55:14.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_hau_rel_ft | 0 | null | transformers | 37,253 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_hau_rel | 0b8d598eb09084d19450def0d40b09ec395b7a68 | 2022-05-03T14:10:30.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_hau_rel | 0 | null | transformers | 37,254 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_hau_en_rel | 90348f43bc91c8bd7584dc03d990426cdb0884e5 | 2022-05-03T14:10:27.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_hau_en_rel | 0 | null | transformers | 37,255 | ---
license: afl-3.0
---
|
theojolliffe/bart-large-cnn-finetuned-roundup-4 | b37f54a307efae2bd6d7f6bdace85567ee6b2a12 | 2022-05-03T16:58:47.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-4 | 0 | null | transformers | 37,256 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-4
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2573
- Rouge1: 49.0193
- Rouge2: 28.6311
- Rougel: 31.3363
- Rougelsum: 46.1408
- Gen Len: 142.0
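Since the base checkpoint is a CNN/DailyMail summarizer, the fine-tuned model should work with the summarization pipeline; a minimal usage sketch (the input string is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-large-cnn-finetuned-roundup-4")
print(summarizer("Paste a roundup article here.")[0]["summary_text"])
```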
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
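For readers who want a comparable run, the settings above translate into `Seq2SeqTrainingArguments` roughly as follows — a minimal sketch, not the original training script; the dataset variables are placeholders:

```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-roundup-4",
    learning_rate=2e-5,              # matches the list above
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=4,
    fp16=True,                       # "Native AMP" mixed precision (needs a GPU)
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # placeholder: a tokenized summarization dataset
    eval_dataset=eval_ds,    # placeholder
    tokenizer=tokenizer,
)
trainer.train()
```

The Adam betas/epsilon and the linear scheduler in the list are the `transformers` defaults, so they need no explicit arguments.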
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3178 | 48.4526 | 28.6361 | 30.2875 | 45.4822 | 142.0 |
| No log | 2.0 | 264 | 1.2404 | 48.139 | 28.2459 | 29.3584 | 45.0785 | 142.0 |
| No log | 3.0 | 396 | 1.2389 | 49.74 | 29.7834 | 33.143 | 46.8147 | 142.0 |
| 0.9855 | 4.0 | 528 | 1.2573 | 49.0193 | 28.6311 | 31.3363 | 46.1408 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sanchit-gandhi/xtreme_s_xlsr_2_bart_covost2_fr_en_2 | f6987540b985711a5326a22b328f26fdfff2e14d | 2022-05-06T12:38:56.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:xtreme_s",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/xtreme_s_xlsr_2_bart_covost2_fr_en_2 | 0 | null | transformers | 37,257 | ---
tags:
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- bleu
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_2_bart_covost2_fr_en_2
This model was trained from scratch on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7768
- Bleu: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.5511 | 0.31 | 500 | 5.1039 | 0.0 |
| 2.2033 | 0.62 | 1000 | 4.1782 | 0.0000 |
| 1.4703 | 0.93 | 1500 | 2.8979 | 0.0000 |
| 1.6507 | 1.23 | 2000 | 2.2250 | 0.0000 |
| 1.6791 | 1.54 | 2500 | 2.0530 | 0.0000 |
| 1.4587 | 1.85 | 3000 | 1.9121 | 0.0000 |
| 1.288 | 2.16 | 3500 | 1.8705 | 0.0000 |
| 1.2244 | 2.47 | 4000 | 1.7940 | 0.0000 |
| 1.0364 | 2.78 | 4500 | 1.7768 | 0.0000 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 2.1.1.dev0
- Tokenizers 0.11.0
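The card has no usage section; since the repository is tagged `automatic-speech-recognition`, a minimal inference sketch would look like the following (this assumes the checkpoint ships a matching processor, and the audio file name is a placeholder for 16 kHz French speech):

```python
from transformers import pipeline

speech_translator = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/xtreme_s_xlsr_2_bart_covost2_fr_en_2",
)
# the model is trained on CoVoST2 fr->en, so it should decode English text
print(speech_translator("french_clip.wav"))
```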
|
lilitket/20220503-174052 | 9f82d341970debf4c041c31406d66f2bf9fe60c4 | 2022-05-04T17:08:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-174052 | 0 | null | transformers | 37,258 | Entry not found |
theojolliffe/bart-large-cnn-finetuned-roundup-16 | d5a75b3ecefd690bb38b28dabb58b26c8971cb61 | 2022-05-03T19:21:08.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-16 | 0 | null | transformers | 37,259 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-16
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8957
- Rouge1: 49.4097
- Rouge2: 29.3516
- Rougel: 31.527
- Rougelsum: 46.4241
- Gen Len: 141.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3170 | 48.412 | 29.2017 | 31.6679 | 45.494 | 141.85 |
| No log | 2.0 | 264 | 1.2292 | 49.0133 | 29.6645 | 30.7612 | 46.1673 | 142.0 |
| No log | 3.0 | 396 | 1.2670 | 49.183 | 29.4104 | 31.573 | 46.7082 | 142.0 |
| 0.9596 | 4.0 | 528 | 1.3059 | 47.3854 | 26.6865 | 28.4666 | 44.4934 | 141.8 |
| 0.9596 | 5.0 | 660 | 1.3288 | 48.1189 | 26.9242 | 31.2938 | 45.3462 | 142.0 |
| 0.9596 | 6.0 | 792 | 1.4084 | 47.5713 | 26.7488 | 29.2959 | 45.1764 | 141.3 |
| 0.9596 | 7.0 | 924 | 1.5043 | 46.5407 | 26.0995 | 29.9007 | 43.9335 | 142.0 |
| 0.3369 | 8.0 | 1056 | 1.5115 | 49.6891 | 29.0514 | 32.33 | 46.9357 | 142.0 |
| 0.3369 | 9.0 | 1188 | 1.6131 | 47.5773 | 27.6348 | 30.5294 | 45.1151 | 142.0 |
| 0.3369 | 10.0 | 1320 | 1.6837 | 46.5699 | 26.3805 | 29.8581 | 43.5252 | 142.0 |
| 0.3369 | 11.0 | 1452 | 1.7874 | 47.1383 | 26.535 | 30.1724 | 44.2508 | 142.0 |
| 0.148 | 12.0 | 1584 | 1.7776 | 49.8061 | 30.1994 | 33.2405 | 47.6102 | 142.0 |
| 0.148 | 13.0 | 1716 | 1.8144 | 48.4451 | 28.2949 | 30.9026 | 45.6614 | 142.0 |
| 0.148 | 14.0 | 1848 | 1.8646 | 50.1964 | 30.4426 | 32.8156 | 47.4134 | 142.0 |
| 0.148 | 15.0 | 1980 | 1.8829 | 48.8129 | 29.2358 | 32.3247 | 46.2233 | 142.0 |
| 0.0726 | 16.0 | 2112 | 1.8957 | 49.4097 | 29.3516 | 31.527 | 46.4241 | 141.9 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
simonnedved/bert-seg-v2 | d154e560d2236c0237d191c570144dedc4e90383 | 2022-05-03T18:20:27.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | simonnedved | null | simonnedved/bert-seg-v2 | 0 | null | transformers | 37,260 | ---
license: apache-2.0
---
|
theojolliffe/bart-large-cnn-finetuned-roundup-32 | 8697a896a26084b0a19576e2e0b364d5604af379 | 2022-05-03T21:24:20.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-32 | 0 | null | transformers | 37,261 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-32
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2324
- Rouge1: 46.462
- Rouge2: 25.9506
- Rougel: 29.4584
- Rougelsum: 44.1863
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3139 | 48.8247 | 29.2173 | 31.7628 | 45.8992 | 142.0 |
| No log | 2.0 | 264 | 1.2287 | 47.9398 | 29.4061 | 30.9133 | 44.9142 | 140.9 |
| No log | 3.0 | 396 | 1.2676 | 49.2743 | 30.4469 | 32.8893 | 46.6208 | 142.0 |
| 0.9578 | 4.0 | 528 | 1.3218 | 47.315 | 26.7303 | 30.5007 | 44.7654 | 142.0 |
| 0.9578 | 5.0 | 660 | 1.3173 | 47.1476 | 25.9408 | 29.4257 | 44.4956 | 142.0 |
| 0.9578 | 6.0 | 792 | 1.4283 | 47.5836 | 27.1572 | 29.8553 | 44.8858 | 142.0 |
| 0.9578 | 7.0 | 924 | 1.5005 | 46.6839 | 26.2214 | 30.1895 | 43.8753 | 140.75 |
| 0.3306 | 8.0 | 1056 | 1.5316 | 47.7611 | 27.1105 | 30.8142 | 44.7598 | 142.0 |
| 0.3306 | 9.0 | 1188 | 1.6295 | 48.4416 | 27.6912 | 30.3409 | 45.317 | 142.0 |
| 0.3306 | 10.0 | 1320 | 1.6564 | 46.5751 | 27.2306 | 29.7265 | 43.7327 | 142.0 |
| 0.3306 | 11.0 | 1452 | 1.7471 | 47.9684 | 27.5739 | 30.7018 | 44.6852 | 141.75 |
| 0.145 | 12.0 | 1584 | 1.7700 | 47.9274 | 28.5129 | 31.129 | 45.1009 | 142.0 |
| 0.145 | 13.0 | 1716 | 1.8391 | 49.8091 | 30.1597 | 33.6004 | 47.2007 | 141.95 |
| 0.145 | 14.0 | 1848 | 1.9212 | 45.2195 | 25.033 | 27.4181 | 42.6161 | 142.0 |
| 0.145 | 15.0 | 1980 | 1.9267 | 48.4959 | 28.1 | 31.2796 | 46.2758 | 142.0 |
| 0.0723 | 16.0 | 2112 | 1.9130 | 47.0765 | 27.4929 | 30.6862 | 44.1458 | 142.0 |
| 0.0723 | 17.0 | 2244 | 1.9514 | 48.5354 | 28.4909 | 31.8966 | 45.7116 | 142.0 |
| 0.0723 | 18.0 | 2376 | 2.0064 | 47.9339 | 28.6862 | 32.4472 | 45.3704 | 142.0 |
| 0.042 | 19.0 | 2508 | 2.0210 | 48.3169 | 28.1579 | 30.2681 | 45.3831 | 141.3 |
| 0.042 | 20.0 | 2640 | 2.0377 | 46.8156 | 26.0122 | 28.817 | 43.9383 | 142.0 |
| 0.042 | 21.0 | 2772 | 2.0587 | 46.3813 | 27.3555 | 29.875 | 43.6605 | 142.0 |
| 0.042 | 22.0 | 2904 | 2.0695 | 45.6728 | 26.0639 | 29.5653 | 42.3772 | 142.0 |
| 0.025 | 23.0 | 3036 | 2.1617 | 46.7283 | 26.2082 | 28.52 | 43.3304 | 142.0 |
| 0.025 | 24.0 | 3168 | 2.1375 | 48.1347 | 28.3444 | 31.7509 | 45.4907 | 142.0 |
| 0.025 | 25.0 | 3300 | 2.1911 | 47.3358 | 27.1479 | 29.4923 | 44.0087 | 142.0 |
| 0.025 | 26.0 | 3432 | 2.1806 | 47.2218 | 26.8421 | 30.03 | 44.2417 | 142.0 |
| 0.0153 | 27.0 | 3564 | 2.1890 | 46.3745 | 27.0095 | 29.7274 | 43.3372 | 142.0 |
| 0.0153 | 28.0 | 3696 | 2.2235 | 50.1274 | 30.8817 | 32.8766 | 46.7486 | 141.5 |
| 0.0153 | 29.0 | 3828 | 2.2236 | 50.1785 | 30.8079 | 32.8886 | 46.9888 | 142.0 |
| 0.0153 | 30.0 | 3960 | 2.2312 | 46.7468 | 26.4272 | 30.1175 | 43.9132 | 142.0 |
| 0.0096 | 31.0 | 4092 | 2.2287 | 47.558 | 26.3933 | 29.9122 | 44.5752 | 142.0 |
| 0.0096 | 32.0 | 4224 | 2.2324 | 46.462 | 25.9506 | 29.4584 | 44.1863 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/bart-large-cnn-finetuned-roundup-64 | 73abc7471cb3453fa47dd59eb44e81bd24a4c97a | 2022-05-04T00:41:04.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-64 | 0 | null | transformers | 37,262 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-64
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4772
- Rouge1: 46.5444
- Rouge2: 27.4056
- Rougel: 29.6779
- Rougelsum: 44.0905
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 64
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3213 | 48.3389 | 28.6641 | 31.4086 | 45.6679 | 142.0 |
| No log | 2.0 | 264 | 1.2325 | 48.798 | 29.3068 | 31.4329 | 45.7945 | 142.0 |
| No log | 3.0 | 396 | 1.2791 | 47.1449 | 27.3965 | 30.56 | 44.4704 | 142.0 |
| 0.9574 | 4.0 | 528 | 1.3134 | 46.2319 | 25.6249 | 28.7673 | 43.7555 | 140.3 |
| 0.9574 | 5.0 | 660 | 1.3187 | 46.7313 | 25.3467 | 29.3873 | 43.9495 | 142.0 |
| 0.9574 | 6.0 | 792 | 1.4271 | 48.1638 | 27.8874 | 30.5334 | 45.9944 | 142.0 |
| 0.9574 | 7.0 | 924 | 1.4876 | 46.7481 | 25.7259 | 29.7214 | 43.7042 | 140.5 |
| 0.3303 | 8.0 | 1056 | 1.5259 | 46.7075 | 26.0716 | 29.5521 | 43.7312 | 142.0 |
| 0.3303 | 9.0 | 1188 | 1.6223 | 48.012 | 27.2795 | 30.4989 | 45.4644 | 142.0 |
| 0.3303 | 10.0 | 1320 | 1.6842 | 48.0074 | 26.8831 | 29.3396 | 45.1937 | 142.0 |
| 0.3303 | 11.0 | 1452 | 1.7317 | 46.52 | 26.5152 | 29.5124 | 43.8797 | 142.0 |
| 0.1478 | 12.0 | 1584 | 1.8087 | 47.5887 | 27.0488 | 29.8569 | 44.7318 | 140.8 |
| 0.1478 | 13.0 | 1716 | 1.8263 | 46.1251 | 25.8576 | 30.1698 | 42.7228 | 142.0 |
| 0.1478 | 14.0 | 1848 | 1.9459 | 46.4034 | 25.7039 | 28.2542 | 43.7254 | 142.0 |
| 0.1478 | 15.0 | 1980 | 1.9539 | 44.4666 | 24.5827 | 27.7147 | 41.9769 | 142.0 |
| 0.0779 | 16.0 | 2112 | 1.9654 | 47.2267 | 26.4562 | 29.7352 | 44.0823 | 142.0 |
| 0.0779 | 17.0 | 2244 | 1.9580 | 48.5086 | 28.0294 | 30.8311 | 45.6336 | 142.0 |
| 0.0779 | 18.0 | 2376 | 2.0065 | 48.293 | 28.5678 | 30.0243 | 45.1384 | 142.0 |
| 0.0499 | 19.0 | 2508 | 1.9313 | 49.0549 | 28.9695 | 32.0711 | 46.3834 | 142.0 |
| 0.0499 | 20.0 | 2640 | 2.0176 | 47.0121 | 25.1606 | 29.0108 | 44.1556 | 142.0 |
| 0.0499 | 21.0 | 2772 | 2.0711 | 48.3754 | 28.2221 | 30.772 | 45.8547 | 140.95 |
| 0.0499 | 22.0 | 2904 | 2.0848 | 45.7392 | 25.254 | 29.0833 | 43.0381 | 142.0 |
| 0.0335 | 23.0 | 3036 | 2.0711 | 47.2931 | 27.4573 | 30.718 | 44.5932 | 142.0 |
| 0.0335 | 24.0 | 3168 | 2.1200 | 50.515 | 30.4253 | 33.7045 | 47.6158 | 142.0 |
| 0.0335 | 25.0 | 3300 | 2.1097 | 46.4737 | 26.3055 | 29.0148 | 43.2135 | 142.0 |
| 0.0335 | 26.0 | 3432 | 2.1695 | 46.9099 | 26.5227 | 29.7757 | 44.0613 | 142.0 |
| 0.0249 | 27.0 | 3564 | 2.1494 | 47.8319 | 27.6364 | 31.3593 | 45.065 | 141.95 |
| 0.0249 | 28.0 | 3696 | 2.1510 | 47.504 | 26.8971 | 31.7196 | 45.0328 | 142.0 |
| 0.0249 | 29.0 | 3828 | 2.1612 | 46.8789 | 27.266 | 30.1009 | 43.8248 | 142.0 |
| 0.0249 | 30.0 | 3960 | 2.1579 | 47.7012 | 27.7761 | 30.935 | 44.3686 | 142.0 |
| 0.018 | 31.0 | 4092 | 2.1981 | 48.4703 | 29.167 | 31.9815 | 45.8005 | 142.0 |
| 0.018 | 32.0 | 4224 | 2.2332 | 45.9512 | 25.8111 | 29.2467 | 42.9234 | 142.0 |
| 0.018 | 33.0 | 4356 | 2.1944 | 47.7189 | 28.1413 | 30.9692 | 44.9361 | 142.0 |
| 0.018 | 34.0 | 4488 | 2.2589 | 50.9687 | 32.3987 | 36.5644 | 48.3938 | 142.0 |
| 0.0132 | 35.0 | 4620 | 2.2269 | 47.8241 | 28.0442 | 31.5535 | 44.9394 | 142.0 |
| 0.0132 | 36.0 | 4752 | 2.2865 | 47.4383 | 27.0825 | 30.4109 | 44.194 | 142.0 |
| 0.0132 | 37.0 | 4884 | 2.3267 | 49.1786 | 29.6416 | 32.875 | 46.8821 | 142.0 |
| 0.0095 | 38.0 | 5016 | 2.2872 | 48.2085 | 28.3304 | 32.1473 | 45.3571 | 142.0 |
| 0.0095 | 39.0 | 5148 | 2.3340 | 46.6762 | 26.1637 | 29.0149 | 43.5923 | 142.0 |
| 0.0095 | 40.0 | 5280 | 2.3425 | 46.7561 | 26.1645 | 29.6337 | 43.6188 | 142.0 |
| 0.0095 | 41.0 | 5412 | 2.3111 | 49.4118 | 29.9761 | 33.4765 | 46.601 | 142.0 |
| 0.0076 | 42.0 | 5544 | 2.3892 | 45.3335 | 25.0161 | 28.4124 | 41.9873 | 142.0 |
| 0.0076 | 43.0 | 5676 | 2.3808 | 46.2506 | 26.4283 | 29.3841 | 42.7488 | 142.0 |
| 0.0076 | 44.0 | 5808 | 2.3825 | 45.6823 | 26.0048 | 29.5501 | 42.6475 | 142.0 |
| 0.0076 | 45.0 | 5940 | 2.3592 | 47.9127 | 26.7924 | 30.2353 | 44.791 | 142.0 |
| 0.0051 | 46.0 | 6072 | 2.4206 | 46.0415 | 27.0681 | 29.9602 | 43.1225 | 142.0 |
| 0.0051 | 47.0 | 6204 | 2.4214 | 48.1229 | 29.0913 | 31.1828 | 45.0022 | 142.0 |
| 0.0051 | 48.0 | 6336 | 2.4176 | 47.3825 | 27.7622 | 30.4138 | 43.9047 | 142.0 |
| 0.0051 | 49.0 | 6468 | 2.4137 | 48.2544 | 28.277 | 31.5548 | 45.6053 | 142.0 |
| 0.0041 | 50.0 | 6600 | 2.4384 | 49.6459 | 30.186 | 33.0059 | 47.0483 | 142.0 |
| 0.0041 | 51.0 | 6732 | 2.4433 | 47.7279 | 27.7857 | 30.2982 | 45.0842 | 142.0 |
| 0.0041 | 52.0 | 6864 | 2.4068 | 48.6047 | 28.1758 | 31.2744 | 45.8336 | 142.0 |
| 0.0041 | 53.0 | 6996 | 2.4362 | 48.7095 | 29.3335 | 31.9509 | 46.4161 | 142.0 |
| 0.003 | 54.0 | 7128 | 2.4307 | 48.836 | 29.6069 | 32.4004 | 46.1986 | 142.0 |
| 0.003 | 55.0 | 7260 | 2.4292 | 47.2945 | 26.7577 | 28.9719 | 43.8988 | 142.0 |
| 0.003 | 56.0 | 7392 | 2.4425 | 45.2261 | 25.6879 | 28.8129 | 42.6474 | 142.0 |
| 0.0024 | 57.0 | 7524 | 2.4386 | 47.967 | 28.5415 | 32.2049 | 45.5111 | 142.0 |
| 0.0024 | 58.0 | 7656 | 2.4528 | 47.5552 | 27.6397 | 30.9151 | 44.2627 | 142.0 |
| 0.0024 | 59.0 | 7788 | 2.4574 | 46.7821 | 27.3368 | 30.6334 | 44.0533 | 142.0 |
| 0.0024 | 60.0 | 7920 | 2.4659 | 47.3507 | 26.8371 | 30.4566 | 44.4452 | 142.0 |
| 0.0018 | 61.0 | 8052 | 2.4766 | 47.9847 | 28.2678 | 30.0664 | 45.0071 | 142.0 |
| 0.0018 | 62.0 | 8184 | 2.4682 | 46.8392 | 27.1275 | 30.144 | 43.6379 | 142.0 |
| 0.0018 | 63.0 | 8316 | 2.4754 | 45.6338 | 26.2812 | 29.4831 | 42.8744 | 142.0 |
| 0.0018 | 64.0 | 8448 | 2.4772 | 46.5444 | 27.4056 | 29.6779 | 44.0905 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/dril-nycguidovoice-senn_spud | c3050cc4a8c20962f01dc5de1d9c843821c75ea0 | 2022-05-04T01:55:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dril-nycguidovoice-senn_spud | 0 | null | transformers | 37,263 | ---
language: en
thumbnail: http://www.huggingtweets.com/dril-nycguidovoice-senn_spud/1651629321136/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503095773059244036/xof9dI-A_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1387151448203358209/HKNuKY7L_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Nick Mullen & Will Sennett</div>
<div style="text-align: center; font-size: 14px;">@dril-nycguidovoice-senn_spud</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Nick Mullen & Will Sennett.
| Data | wint | Nick Mullen | Will Sennett |
| --- | --- | --- | --- |
| Tweets downloaded | 3229 | 1007 | 3231 |
| Retweets | 486 | 71 | 314 |
| Short tweets | 300 | 41 | 631 |
| Tweets kept | 2443 | 895 | 2286 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dcek2rh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-nycguidovoice-senn_spud's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2f1xmo4s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2f1xmo4s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-nycguidovoice-senn_spud')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ybkim95/lp-bert-model | 0feb2e341932681e2f023c3933adcffa63d9d99f | 2022-05-04T06:26:12.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ybkim95 | null | ybkim95/lp-bert-model | 0 | null | sentence-transformers | 37,264 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ybkim95/lp-bert-model
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ybkim95/lp-bert-model')
embeddings = model.encode(sentences)
print(embeddings)
```
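Since the card mentions clustering and semantic search, one way to score a sentence pair with these embeddings is `util.cos_sim` from sentence-transformers (the sentences are just the examples above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('ybkim95/lp-bert-model')
embeddings = model.encode(["This is an example sentence",
                           "Each sentence is converted"],
                          convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
```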
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ybkim95/lp-bert-model')
model = AutoModel.from_pretrained('ybkim95/lp-bert-model')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ybkim95/lp-bert-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 46 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ghabin/test_Huxley_Orwell | 5752de6d20a59c0e40b4833800c46757640f6b4f | 2022-05-04T10:25:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | ghabin | null | ghabin/test_Huxley_Orwell | 0 | null | transformers | 37,265 | ---
license: afl-3.0
---
|
iis2009002/xlm-roberta-base-finetuned-panx-en | 43aecee3a992b2712b94063b611c37604cd3f16f | 2022-05-12T07:08:50.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | iis2009002 | null | iis2009002/xlm-roberta-base-finetuned-panx-en | 0 | null | transformers | 37,266 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.692179700499168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
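For reference, a minimal inference sketch — PAN-X covers person, organisation and location entities, and the example sentence is made up:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="iis2009002/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Angela Merkel visited the Apple offices in London."))
```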
|
iis2009002/xlm-roberta-base-finetuned-panx-all | a5abefed94821920c943f6a4a5e5d0445877862e | 2022-05-12T07:17:40.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | iis2009002 | null | iis2009002/xlm-roberta-base-finetuned-panx-all | 0 | null | transformers | 37,267 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3 | 1.0 | 835 | 0.1862 | 0.8114 |
| 0.1552 | 2.0 | 1670 | 0.1758 | 0.8426 |
| 0.1002 | 3.0 | 2505 | 0.1752 | 0.8557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
qgdmonilla/DialoGPT-small-harrypotter | 0d536a321a118437a822431f81a5021735bf4693 | 2022-05-04T11:56:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | qgdmonilla | null | qgdmonilla/DialoGPT-small-harrypotter | 0 | null | transformers | 37,268 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model
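
The card gives no usage details, so here is a single-turn chat sketch following the standard DialoGPT pattern (the prompt is arbitrary; for multi-turn chat, append the conversation history before each generation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("qgdmonilla/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("qgdmonilla/DialoGPT-small-harrypotter")

# encode the user turn, terminated by the EOS token as DialoGPT expects
ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token,
                       return_tensors="pt")
reply_ids = model.generate(ids, max_length=100,
                           pad_token_id=tokenizer.eos_token_id)
# decode only the newly generated tokens (everything after the prompt)
print(tokenizer.decode(reply_ids[0, ids.shape[-1]:], skip_special_tokens=True))
```
|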
uhlenbeckmew/distilroberta-base-swift_shake | c2dec21da65f0850bcd1e3b99e74223d9c5e201e | 2022-05-04T13:25:06.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | uhlenbeckmew | null | uhlenbeckmew/distilroberta-base-swift_shake | 0 | null | transformers | 37,269 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-swift_shake
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-swift_shake
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 334 | 2.5817 |
| 2.7363 | 2.0 | 668 | 2.4499 |
| 2.4584 | 3.0 | 1002 | 2.5309 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
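A quick inference sketch for completeness — RoBERTa-style checkpoints use `<mask>` as the mask token, and the prompt is only an example:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="uhlenbeckmew/distilroberta-base-swift_shake")
print(fill("Shall I compare thee to a summer's <mask>?"))
```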
|
kravchenko/uk-t5-compressed-gec | 60933636333da2f8a65f83327e497d7b4ee08804 | 2022-05-04T16:26:43.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kravchenko | null | kravchenko/uk-t5-compressed-gec | 0 | null | transformers | 37,270 | Entry not found |
simonnedved/codet5-large-v2 | 6726dc185046101a4ca46e22df26243a115a15df | 2022-05-04T16:02:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | simonnedved | null | simonnedved/codet5-large-v2 | 0 | null | transformers | 37,271 | ---
license: apache-2.0
---
|
huggingtweets/cpulisic_10-usmnt-zacksteffen_ | aad6df576654fc58a63d2d3d3a77c8896753077c | 2022-05-04T16:00:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cpulisic_10-usmnt-zacksteffen_ | 0 | null | transformers | 37,272 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511457717281607680/SuAprf1T_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & Zack Steffen & Christian Pulisic</div>
<div style="text-align: center; font-size: 14px;">@cpulisic_10-usmnt-zacksteffen_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from USMNT & Zack Steffen & Christian Pulisic.
| Data | USMNT | Zack Steffen | Christian Pulisic |
| --- | --- | --- | --- |
| Tweets downloaded | 3243 | 3120 | 1159 |
| Retweets | 599 | 869 | 629 |
| Short tweets | 215 | 523 | 93 |
| Tweets kept | 2429 | 1728 | 437 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/395einau/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cpulisic_10-usmnt-zacksteffen_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1x9olwhx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1x9olwhx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cpulisic_10-usmnt-zacksteffen_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/andrewf301 | cc477fd66db5ba0453bc33d4dce889044ae5e3e1 | 2022-05-04T16:37:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/andrewf301 | 0 | null | transformers | 37,273 | ---
language: en
thumbnail: http://www.huggingtweets.com/andrewf301/1651682241128/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484200123827580931/t1rZx1nN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Andrew</div>
<div style="text-align: center; font-size: 14px;">@andrewf301</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Andrew.
| Data | Andrew |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 1010 |
| Short tweets | 328 |
| Tweets kept | 1904 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ap29rsr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @andrewf301's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7kehh3u8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7kehh3u8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/andrewf301')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/usmnt-zacksteffen_ | 90cf59452eb38735c2d9d4835c5608fb59bde182 | 2022-05-04T17:19:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/usmnt-zacksteffen_ | 0 | null | transformers | 37,274 | ---
language: en
thumbnail: http://www.huggingtweets.com/usmnt-zacksteffen_/1651684743123/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509644465388105731/dErjQdWT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & Zack Steffen</div>
<div style="text-align: center; font-size: 14px;">@usmnt-zacksteffen_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from USMNT & Zack Steffen.
| Data | USMNT | Zack Steffen |
| --- | --- | --- |
| Tweets downloaded | 3250 | 3120 |
| Retweets | 600 | 869 |
| Short tweets | 215 | 523 |
| Tweets kept | 2435 | 1728 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34uud8si/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usmnt-zacksteffen_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wiyd3kq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/usmnt-zacksteffen_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/bart-large-cnn-finetuned-roundup-2-4 | 56082e3986f8b9b6f5761768cc36aceb681dfdbe | 2022-05-04T19:31:38.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-2-4 | 0 | null | transformers | 37,275 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-2-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-2-4
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0908
- Rouge1: 51.9961
- Rouge2: 32.3963
- Rougel: 32.1774
- Rougelsum: 50.1033
- Gen Len: 141.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 167 | 1.2152 | 52.234 | 33.1104 | 33.308 | 49.5516 | 142.0 |
| No log | 2.0 | 334 | 1.1054 | 52.7096 | 33.4698 | 33.9595 | 49.8736 | 140.3333 |
| 1.0437 | 3.0 | 501 | 1.0796 | 51.699 | 32.4255 | 34.0294 | 49.5276 | 141.7143 |
| 1.0437 | 4.0 | 668 | 1.0908 | 51.9961 | 32.3963 | 32.1774 | 50.1033 | 141.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/kanyewest-usmnt | 1a9894789a6ec215f77d0d8b0bb0c6cf3b791c86 | 2022-05-04T18:51:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/kanyewest-usmnt | 0 | null | transformers | 37,276 | ---
language: en
thumbnail: http://www.huggingtweets.com/kanyewest-usmnt/1651690314434/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1410587808666955776/mWkKWw1U_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">USMNT & ye</div>
<div style="text-align: center; font-size: 14px;">@kanyewest-usmnt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from USMNT & ye.
| Data | USMNT | ye |
| --- | --- | --- |
| Tweets downloaded | 3247 | 1858 |
| Retweets | 600 | 188 |
| Short tweets | 215 | 573 |
| Tweets kept | 2432 | 1097 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12os8ehp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kanyewest-usmnt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pwtssam) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pwtssam/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kanyewest-usmnt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
phosseini/glucose-roberta-large | 1c71e46cea65447ce81033f9b13e872565f0a357 | 2022-05-04T18:06:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | phosseini | null | phosseini/glucose-roberta-large | 0 | null | transformers | 37,277 | Entry not found |
kravchenko/uk-mt5-gec | ec4c96471d521b1a861f4f1608d6fc3385024524 | 2022-05-04T18:25:34.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | kravchenko | null | kravchenko/uk-mt5-gec | 0 | null | transformers | 37,278 | Entry not found |
theojolliffe/bart-large-cnn-finetuned-roundup-2-8 | 5ffea8acb5f94ef359813251a0c5fc3566632273 | 2022-05-05T08:30:11.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-2-8 | 0 | null | transformers | 37,279 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-2-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-2-8
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1855
- Rouge1: 53.552
- Rouge2: 34.9077
- Rougel: 38.0158
- Rougelsum: 50.7179
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 167 | 1.2085 | 50.8706 | 32.0069 | 32.9241 | 47.9805 | 142.0 |
| No log | 2.0 | 334 | 1.0897 | 53.2218 | 34.1317 | 34.4827 | 50.4795 | 139.4286 |
| 1.0256 | 3.0 | 501 | 1.0535 | 50.8882 | 30.2514 | 31.5051 | 47.9856 | 141.9048 |
| 1.0256 | 4.0 | 668 | 1.0515 | 54.9414 | 35.2309 | 36.006 | 52.0331 | 142.0 |
| 1.0256 | 5.0 | 835 | 1.0829 | 53.0709 | 33.4587 | 36.4223 | 50.1627 | 140.7619 |
| 0.4579 | 6.0 | 1002 | 1.1310 | 51.5274 | 30.7069 | 32.4146 | 48.8851 | 142.0 |
| 0.4579 | 7.0 | 1169 | 1.1670 | 52.1536 | 31.7158 | 35.7483 | 49.2678 | 142.0 |
| 0.4579 | 8.0 | 1336 | 1.1855 | 53.552 | 34.9077 | 38.0158 | 50.7179 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jjezabek/bert-base-uncased-sst | 3ec25f66388df70c0c40762d91ae5a6c759b3914 | 2022-05-04T23:12:33.000Z | [
"pytorch"
] | null | false | jjezabek | null | jjezabek/bert-base-uncased-sst | 0 | null | null | 37,280 | Entry not found |
laituan245/t5-v1_1-base-caption2smiles | 89ee91e665b6cc17af8b786e553d82afa78aa0c0 | 2022-05-05T00:23:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | laituan245 | null | laituan245/t5-v1_1-base-caption2smiles | 0 | null | transformers | 37,281 | ---
license: apache-2.0
---
|
maesneako/gpt2-maptask-GF | 960a499695dfff9d5cfbe24159798d987b808202 | 2022-05-08T08:26:23.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | maesneako | null | maesneako/gpt2-maptask-GF | 0 | null | transformers | 37,282 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-maptask-GF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-maptask-GF
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7116
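
Since this is a causal language model, a cross-entropy loss of 2.7116 corresponds to an evaluation perplexity of exp(2.7116) ≈ 15.1.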
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.0139 | 0.84 | 1000 | 2.9045 |
| 2.7923 | 1.67 | 2000 | 2.7825 |
| 2.6799 | 2.51 | 3000 | 2.7264 |
| 2.607 | 3.34 | 4000 | 2.6917 |
| 2.55 | 4.18 | 5000 | 2.6708 |
| 2.4988 | 5.01 | 6000 | 2.6570 |
| 2.4697 | 5.85 | 7000 | 2.6480 |
| 2.426 | 6.68 | 8000 | 2.6452 |
| 2.4031 | 7.52 | 9000 | 2.6404 |
| 2.3654 | 8.35 | 10000 | 2.6416 |
| 2.3471 | 9.19 | 11000 | 2.6418 |
| 2.3195 | 10.03 | 12000 | 2.6444 |
| 2.2969 | 10.86 | 13000 | 2.6455 |
| 2.2767 | 11.7 | 14000 | 2.6489 |
| 2.2608 | 12.53 | 15000 | 2.6525 |
| 2.2381 | 13.37 | 16000 | 2.6563 |
| 2.2228 | 14.2 | 17000 | 2.6602 |
| 2.2037 | 15.04 | 18000 | 2.6641 |
| 2.1911 | 15.87 | 19000 | 2.6684 |
| 2.1742 | 16.71 | 20000 | 2.6739 |
| 2.1626 | 17.54 | 21000 | 2.6776 |
| 2.1504 | 18.38 | 22000 | 2.6800 |
| 2.143 | 19.21 | 23000 | 2.6832 |
| 2.1277 | 20.05 | 24000 | 2.6892 |
| 2.1178 | 20.89 | 25000 | 2.6924 |
| 2.1128 | 21.72 | 26000 | 2.6952 |
| 2.1009 | 22.56 | 27000 | 2.6978 |
| 2.0957 | 23.39 | 28000 | 2.7006 |
| 2.0885 | 24.23 | 29000 | 2.7024 |
| 2.0849 | 25.06 | 30000 | 2.7065 |
| 2.0794 | 25.9 | 31000 | 2.7075 |
| 2.0783 | 26.73 | 32000 | 2.7090 |
| 2.0698 | 27.57 | 33000 | 2.7106 |
| 2.0718 | 28.4 | 34000 | 2.7109 |
| 2.069 | 29.24 | 35000 | 2.7116 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
schorndorfer/distilgpt2-finetuned-wikitext2 | caf5ec969b1c18545e21165f6e9ca0e6374f9514 | 2022-05-05T03:42:12.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | schorndorfer | null | schorndorfer/distilgpt2-finetuned-wikitext2 | 0 | null | transformers | 37,283 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
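
(For a causal language model, this loss corresponds to a perplexity of exp(3.6425) ≈ 38.2.)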
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.76 | 1.0 | 2334 | 3.6658 |
| 3.6526 | 2.0 | 4668 | 3.6468 |
| 3.6004 | 3.0 | 7002 | 3.6425 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
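
The card reports only a cross-entropy eval loss. For a causal language model evaluated this way, perplexity is `exp(loss)`, so the reported value can be converted directly — a quick worked check:

```python
# Perplexity from the reported eval loss of 3.6425 (perplexity = exp(loss)).
import math

eval_loss = 3.6425
print(f"perplexity ~ {math.exp(eval_loss):.2f}")  # ~ 38.2
```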
|
maesneako/gpt2-fr-space-paco-cheese | 15ac9c33a5ffdcd2755067d04eac5e2f4f47d2e6 | 2022-05-05T03:43:32.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | maesneako | null | maesneako/gpt2-fr-space-paco-cheese | 0 | null | transformers | 37,284 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-fr-space-paco-cheese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-fr-space-paco-cheese
This model is a fine-tuned version of [dbddv01/gpt2-french-small](https://huggingface.co/dbddv01/gpt2-french-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 65
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
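
As a hedged illustration of how the hyperparameter list above maps to code, here is a reconstruction as transformers `TrainingArguments`; the `output_dir` is an assumption, and the card does not name the dataset or data collator actually used.

```python
# Hedged reconstruction of the listed hyperparameters; output_dir is assumed.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gpt2-fr-space-paco-cheese",  # assumption: not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=65,
)
```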
|
schorndorfer/distilroberta-base-finetuned-wikitext2 | de633356ee76b24b9a081ed374184b48942b8f42 | 2022-05-05T04:09:37.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | schorndorfer | null | schorndorfer/distilroberta-base-finetuned-wikitext2 | 0 | null | transformers | 37,285 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0853 | 1.0 | 2406 | 1.9214 |
| 1.986 | 2.0 | 4812 | 1.8799 |
| 1.9568 | 3.0 | 7218 | 1.8202 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
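
Since this is a masked-language model, a minimal sketch with the standard fill-mask pipeline shows the expected calling convention; the example sentence is hypothetical, not from the card, and RoBERTa-style checkpoints use `<mask>` as the mask token.

```python
# Hedged sketch: standard fill-mask pipeline; the sentence is an assumption.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="schorndorfer/distilroberta-base-finetuned-wikitext2")
print(unmasker("The capital of France is <mask>."))
```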
|
maesneako/gpt2-fr-eos-paco-cheese | 94ce9fa568ed998ccb62f7078175d4a403231e23 | 2022-05-05T04:47:13.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | maesneako | null | maesneako/gpt2-fr-eos-paco-cheese | 0 | null | transformers | 37,286 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-fr-eos-paco-cheese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-fr-eos-paco-cheese
This model is a fine-tuned version of [dbddv01/gpt2-french-small](https://huggingface.co/dbddv01/gpt2-french-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 65
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
maesneako/gpt2-fr-space-orfeo-cid-paco-cheese | 8c0adb39a7010820fb67018037ccf08fd1f9b9f4 | 2022-05-05T08:21:37.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | maesneako | null | maesneako/gpt2-fr-space-orfeo-cid-paco-cheese | 0 | null | transformers | 37,287 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-fr-space-orfeo-cid-paco-cheese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-fr-space-orfeo-cid-paco-cheese
This model is a fine-tuned version of [dbddv01/gpt2-french-small](https://huggingface.co/dbddv01/gpt2-french-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
maesneako/gpt2-fr-eos-orfeo-cid-paco-cheese | 92013c98c9718737781bed4b3a7a1b1d256bb02e | 2022-05-05T11:08:06.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | maesneako | null | maesneako/gpt2-fr-eos-orfeo-cid-paco-cheese | 0 | null | transformers | 37,288 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-fr-eos-orfeo-cid-paco-cheese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-fr-eos-orfeo-cid-paco-cheese
This model is a fine-tuned version of [dbddv01/gpt2-french-small](https://huggingface.co/dbddv01/gpt2-french-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
masakhane/afrimbart_lug_en_news | 96d31bdae827f33c8a4116bfdd475c501501a240 | 2022-05-05T13:41:16.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimbart_lug_en_news | 0 | null | transformers | 37,289 | ---
license: afl-3.0
---
|
masakhane/afrimt5_lug_en_news | 80adfd97e331ce3cc015f52cb94bf2937dbeaa20 | 2022-05-05T13:41:20.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_lug_en_news | 0 | null | transformers | 37,290 | ---
license: afl-3.0
---
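The card gives no usage details; the model id suggests Luganda-to-English news translation. A generic sketch under that assumption, using the standard `AutoModelForSeq2SeqLM` interface and an unprefixed input:

```python
# Hedged sketch: assumes a standard seq2seq interface and lug->en direction.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("masakhane/afrimt5_lug_en_news")
model = AutoModelForSeq2SeqLM.from_pretrained("masakhane/afrimt5_lug_en_news")

inputs = tokenizer("Abantu bangi mu kibuga", return_tensors="pt")  # hypothetical input
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```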
|
masakhane/afribyt5_en_lug_news | 7a6e5ffef26172398f547993b5ffcac3a15136bc | 2022-05-05T13:50:15.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afribyt5_en_lug_news | 0 | null | transformers | 37,291 | ---
license: afl-3.0
---
|
masakhane/byt5_en_lug_news | dfd8d9a72038aa5b3ec4f63db0ec26339f1ead87 | 2022-05-05T13:50:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/byt5_en_lug_news | 0 | null | transformers | 37,292 | ---
license: afl-3.0
---
|
masakhane/mt5_en_lug_news | d2b1e38224558dcec10f834030ad6657257e0b85 | 2022-05-05T14:04:25.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mt5_en_lug_news | 0 | null | transformers | 37,293 | ---
license: afl-3.0
---
|
masakhane/mbart50_lug_en_news | 3383906ddaa3b44844b3c801082e8056a9043ff8 | 2022-05-05T14:04:31.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_lug_en_news | 0 | null | transformers | 37,294 | ---
license: afl-3.0
---
|
masakhane/mbart50_en_lug_news | d774f97102a91b186633fc3811925b485bee4de6 | 2022-05-05T14:04:36.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_en_lug_news | 0 | null | transformers | 37,295 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_lug_en_rel_news_ft | 5e1ff3332af98fb73d467df7ad0d65431cd50f5c | 2022-05-05T14:23:00.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_lug_en_rel_news_ft | 0 | null | transformers | 37,296 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_lug_en_rel | e90467566fe0e054653ba884ded519aa723a5a58 | 2022-05-05T14:29:02.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_lug_en_rel | 0 | null | transformers | 37,297 | ---
license: afl-3.0
---
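The card gives no usage details. A hedged sketch assuming the standard M2M100 interface follows: "lg" (Ganda) is the closest M2M100 language code to Luganda, but the fine-tuned checkpoint may expect different settings, so treat this as an illustration only.

```python
# Hedged sketch: standard M2M100 generation; src_lang="lg" is an assumption.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("masakhane/m2m100_418M_lug_en_rel")
tokenizer = M2M100Tokenizer.from_pretrained("masakhane/m2m100_418M_lug_en_rel")

tokenizer.src_lang = "lg"  # assumption: Ganda/Luganda source
inputs = tokenizer("Oli otya?", return_tensors="pt")  # hypothetical Luganda input
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```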
|
tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.15_1 | fd4a6ad7e559884a868a6f29669d91a50fe5ebb4 | 2022-05-05T14:00:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_para0_sent1_span2_itTrue_sargmax_rrFalse_8_1024_0.15_1 | 0 | null | transformers | 37,298 | Entry not found |
tau/False_large_pmi_paraNone_sentNone_span0_itTrue_sargmax_rrFalse_8_1024_0.15_1 | 07dc77b8cd4ea8e5a4bb1d7a86d47516f10e18dd | 2022-05-05T13:59:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_pmi_paraNone_sentNone_span0_itTrue_sargmax_rrFalse_8_1024_0.15_1 | 0 | null | transformers | 37,299 | Entry not found |