modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
aXhyra/sentiment_trained | d21ee2f7fdb6c14701ec0904a34ce09d94bc6416 | 2021-12-11T15:01:36.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/sentiment_trained | 9 | null | transformers | 12,200 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: sentiment_trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7253452834090693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2671
- F1: 0.7253
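For quick experimentation, a minimal usage sketch with the `transformers` text-classification pipeline (the example sentence is illustrative; tweet_eval's sentiment labels are negative/neutral/positive, but the checkpoint may expose them as `LABEL_0`/`LABEL_1`/`LABEL_2`):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; label names are read from the model's config.
classifier = pipeline("text-classification", model="aXhyra/sentiment_trained")

print(classifier("I love the new update, everything feels faster!"))
```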
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6647 | 1.0 | 11404 | 0.6424 | 0.7189 |
| 0.6018 | 2.0 | 22808 | 0.7947 | 0.7170 |
| 0.5004 | 3.0 | 34212 | 1.0811 | 0.7200 |
| 0.3761 | 4.0 | 45616 | 1.2671 | 0.7253 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aadilhassan/Chandlerbot | 502381aa4e9b7a845f8c441e9fc659c6cff327f2 | 2021-11-28T04:36:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | aadilhassan | null | aadilhassan/Chandlerbot | 9 | 1 | transformers | 12,201 | ---
tags:
- conversational
---
# Chandler (Friends) DialoGPT Model |
abhishek/autonlp-bbc-roberta-37249301 | 15325f233cd756407927d35665a8e5df63d72fe8 | 2021-11-30T13:35:38.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:abhishek/autonlp-data-bbc-roberta",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | abhishek | null | abhishek/autonlp-bbc-roberta-37249301 | 9 | null | transformers | 12,202 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-bbc-roberta
co2_eq_emissions: 1.9859980179658823
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 37249301
- CO2 Emissions (in grams): 1.9859980179658823
## Validation Metrics
- Loss: 0.06406362354755402
- Accuracy: 0.9833887043189369
- Macro F1: 0.9832763664701248
- Micro F1: 0.9833887043189369
- Weighted F1: 0.9833288528828136
- Macro Precision: 0.9847257743677181
- Micro Precision: 0.9833887043189369
- Weighted Precision: 0.9835392869652073
- Macro Recall: 0.982101705176067
- Micro Recall: 0.9833887043189369
- Weighted Recall: 0.9833887043189369
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-bbc-roberta-37249301
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-bbc-roberta-37249301", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-bbc-roberta-37249301", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
adamlin/ml999_hand_planer | 517648af2a4a5ba0805bb33770cc096fbd562226 | 2021-12-20T16:52:05.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_hand_planer | 9 | null | transformers | 12,203 | Entry not found |
adamlin/ml999_metal_num | bae6b599f58ff5a9449f6e970962d68adc28fee8 | 2021-12-20T16:57:38.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_metal_num | 9 | null | transformers | 12,204 | Entry not found |
adelgasmi/autonlp-kpmg_nlp-18833547 | 87c288db1f1ba3e4eb10f43ae77abfa1f0fb1cd0 | 2021-10-15T11:44:36.000Z | [
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:adelgasmi/autonlp-data-kpmg_nlp",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | adelgasmi | null | adelgasmi/autonlp-kpmg_nlp-18833547 | 9 | 1 | transformers | 12,205 | ---
tags: autonlp
language: ar
widget:
- text: "I love AutoNLP 🤗"
datasets:
- adelgasmi/autonlp-data-kpmg_nlp
co2_eq_emissions: 64.58945483765274
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 18833547
- CO2 Emissions (in grams): 64.58945483765274
## Validation Metrics
- Loss: 0.14247722923755646
- Accuracy: 0.9586074193404036
- Macro F1: 0.9468339778730883
- Micro F1: 0.9586074193404036
- Weighted F1: 0.9585551117678807
- Macro Precision: 0.9445436604001405
- Micro Precision: 0.9586074193404036
- Weighted Precision: 0.9591405429662925
- Macro Recall: 0.9499427161888565
- Micro Recall: 0.9586074193404036
- Weighted Recall: 0.9586074193404036
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/adelgasmi/autonlp-kpmg_nlp-18833547
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("adelgasmi/autonlp-kpmg_nlp-18833547", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("adelgasmi/autonlp-kpmg_nlp-18833547", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
akilesh96/autonlp-mrcooper_text_classification-529614927 | c3b7f6b54fae602df18fc9041766f0314c6a3e44 | 2022-01-25T19:43:57.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:akilesh96/autonlp-data-mrcooper_text_classification",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | akilesh96 | null | akilesh96/autonlp-mrcooper_text_classification-529614927 | 9 | null | transformers | 12,206 | ---
tags: autonlp
language: en
widget:
- text: "Not Many People Know About The City 1200 Feet Below Detroit"
- text: "Bob accepts the challenge, and the next week they're standing in Saint Peters square. 'This isnt gonna work, he's never going to see me here when theres this much people. You stay here, I'll go talk to him and you'll see me on the balcony, the guards know me too.' Half an hour later, Bob and the pope appear side by side on the balcony. Bobs boss gets a heart attack, and Bob goes to visit him in the hospital."
- text: "I’m sorry if you made it this far, but I’m just genuinely idk, I feel like I shouldn’t give up, it’s just getting harder to come back from stuff like this."
datasets:
- akilesh96/autonlp-data-mrcooper_text_classification
co2_eq_emissions: 5.999771405025692
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 529614927
- CO2 Emissions (in grams): 5.999771405025692
## Validation Metrics
- Loss: 0.7582379579544067
- Accuracy: 0.7636103151862464
- Macro F1: 0.770630619486531
- Micro F1: 0.7636103151862464
- Weighted F1: 0.765233270165301
- Macro Precision: 0.7746285216467107
- Micro Precision: 0.7636103151862464
- Weighted Precision: 0.7683270753840836
- Macro Recall: 0.7680576576961138
- Micro Recall: 0.7636103151862464
- Weighted Recall: 0.7636103151862464
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/akilesh96/autonlp-mrcooper_text_classification-529614927
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
alexander-karpov/bert-eatable-classification-en-ru | 51d582fc67dd77f70eb583c4f1f7d13f9e2b2f3b | 2021-07-31T21:01:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | alexander-karpov | null | alexander-karpov/bert-eatable-classification-en-ru | 9 | null | transformers | 12,207 | Entry not found |
ali2066/distilbert-base-uncased-finetuned-sst-2-english-finetuned-argumentative | 78c93fcc153addb562f8e0a648766aca9bc1c6ee | 2022-02-08T07:12:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ali2066 | null | ali2066/distilbert-base-uncased-finetuned-sst-2-english-finetuned-argumentative | 9 | null | transformers | 12,208 | Entry not found |
alireza7/TRANSFORMER-persian-base-tebyan | 7065a4a4b3096b940fdefe79bd7f2af7ed97905a | 2021-09-29T19:26:52.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/TRANSFORMER-persian-base-tebyan | 9 | null | transformers | 12,209 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
allenai/dsp_roberta_base_dapt_news_tapt_hyperpartisan_news_5015 | 85c3c8115e92d410390e987346e55edebb84a247 | 2021-05-20T13:12:07.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_dapt_news_tapt_hyperpartisan_news_5015 | 9 | null | transformers | 12,210 | Entry not found |
allenai/dsp_roberta_base_tapt_hyperpartisan_news_515 | b2a0ebf3616e07cba04450409675d9420dcf8dd3 | 2021-05-20T13:27:46.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_hyperpartisan_news_515 | 9 | null | transformers | 12,211 | Entry not found |
allenai/dsp_roberta_base_tapt_imdb_70000 | 295a3663d497a9e7c4041e8e0822cf1c28000b8f | 2021-05-20T13:30:26.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_imdb_70000 | 9 | null | transformers | 12,212 | Entry not found |
allenai/dsp_roberta_base_tapt_rct_180K | d5e67e4542d1c5acba9041d49ac12abda3561e89 | 2021-05-20T13:31:37.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_rct_180K | 9 | null | transformers | 12,213 | Entry not found |
aloxatel/3JQ | e6fc27d4ff611fba59bd4b4deffc0ae80c03f9eb | 2021-05-20T13:38:29.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | aloxatel | null | aloxatel/3JQ | 9 | null | transformers | 12,214 | Entry not found |
aloxatel/W1G | 8627076551bd2e7ee39efd886519ff40d5218034 | 2021-05-20T14:00:07.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | aloxatel | null | aloxatel/W1G | 9 | null | transformers | 12,215 | Entry not found |
amauboussin/twitter-toxicity-v0 | 7eb433811d3fa2a3873fa4d9f96631f5ff3dab49 | 2022-01-05T05:52:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | amauboussin | null | amauboussin/twitter-toxicity-v0 | 9 | null | transformers | 12,216 | Entry not found |
anirudh21/albert-base-v2-finetuned-wnli | af225845ad95ff6b151e0b128e0f832d48b17257 | 2022-01-25T16:57:16.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anirudh21 | null | anirudh21/albert-base-v2-finetuned-wnli | 9 | null | transformers | 12,217 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-wnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6878
- Accuracy: 0.5634
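Since WNLI is a sentence-pair task, a minimal inference sketch looks like this (the premise/hypothesis pair is illustrative, and the label mapping follows GLUE's convention, where 1 means entailment):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "anirudh21/albert-base-v2-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the sentence pair together, as the GLUE fine-tuning scripts do.
inputs = tokenizer(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "The trophy is too big.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 = not_entailment, 1 = entailment
```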
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6878 | 0.5634 |
| No log | 2.0 | 80 | 0.6919 | 0.5634 |
| No log | 3.0 | 120 | 0.6877 | 0.5634 |
| No log | 4.0 | 160 | 0.6984 | 0.4085 |
| No log | 5.0 | 200 | 0.6957 | 0.5211 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
anirudh21/albert-xlarge-v2-finetuned-wnli | 872243a7d0fc6d0b4a8f9aab6083685a07253a53 | 2022-01-26T08:43:31.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anirudh21 | null | anirudh21/albert-xlarge-v2-finetuned-wnli | 9 | null | transformers | 12,218 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-xlarge-v2-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xlarge-v2-finetuned-wnli
This model is a fine-tuned version of [albert-xlarge-v2](https://huggingface.co/albert-xlarge-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6869
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6906 | 0.5070 |
| No log | 2.0 | 80 | 0.6869 | 0.5634 |
| No log | 3.0 | 120 | 0.6905 | 0.5352 |
| No log | 4.0 | 160 | 0.6960 | 0.4225 |
| No log | 5.0 | 200 | 0.7011 | 0.3803 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
arogyaGurkha/kobert-finetuned-squad_kor_v1 | be06b82eb9c4a2044d072d3f1b016e0f0e3dc60d | 2021-09-13T03:59:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_kor_v1",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | arogyaGurkha | null | arogyaGurkha/kobert-finetuned-squad_kor_v1 | 9 | null | transformers | 12,219 | ---
tags:
- generated_from_trainer
datasets:
- squad_kor_v1
model-index:
- name: kobert-finetuned-squad_kor_v1
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad_kor_v1
type: squad_kor_v1
args: squad_kor_v1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobert-finetuned-squad_kor_v1
This model is a fine-tuned version of [monologg/kobert](https://huggingface.co/monologg/kobert) on the squad_kor_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0928
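A minimal extractive-QA sketch with the `transformers` pipeline, assuming the hosted checkpoint ships a tokenizer that `AutoTokenizer` can load (KoBERT historically required a custom tokenizer class); the question/context pair is illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="arogyaGurkha/kobert-finetuned-squad_kor_v1")

# Question: "What is the capital of South Korea?"
# Context:  "The capital of South Korea is Seoul. Seoul lies on the Han River."
result = qa(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이다. 서울은 한강을 끼고 있다.",
)
print(result)  # {"answer": ..., "score": ..., "start": ..., "end": ...}
```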
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0155 | 1.0 | 3808 | 4.0928 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
tner/xlm-roberta-base-uncased-mit-movie-trivia | e44d3c8032ed9b6d36f19082de7fb36f66ffe513 | 2021-02-13T00:08:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tner | null | tner/xlm-roberta-base-uncased-mit-movie-trivia | 9 | null | transformers | 12,220 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for named entity recognition (NER). See the [TNER repository](https://github.com/asahi417/tner) for more details.
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-movie-trivia")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-movie-trivia")
``` |
bakrianoo/t5-arabic-base | 94aea39d7e38242e3efa2ad522789e6f61fe760b | 2021-06-26T17:05:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"Arabic",
"dataset:mc4",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | bakrianoo | null | bakrianoo/t5-arabic-base | 9 | null | transformers | 12,221 | ---
language: Arabic
datasets:
- mc4
license: apache-2.0
---
## Arabic T5 Base Model
A customized T5 model for Arabic and English tasks. It can be used as an alternative to the `google/mt5-base` model, as it is much smaller and targets only Arabic- and English-based tasks.
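A minimal loading sketch (the checkpoint is a pre-trained seq2seq model, so like `google/mt5-base` it is normally fine-tuned on a downstream task before its generations are useful; the input sentence below is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Drop-in replacement for google/mt5-base when only Arabic and English are needed.
tokenizer = AutoTokenizer.from_pretrained("bakrianoo/t5-arabic-base")
model = AutoModelForSeq2SeqLM.from_pretrained("bakrianoo/t5-arabic-base")

# The checkpoint is pre-trained on mC4 only; fine-tune it (e.g. with Seq2SeqTrainer)
# on a task such as summarization or translation before expecting useful generations.
inputs = tokenizer("القاهرة هي عاصمة مصر.", return_tensors="pt")  # "Cairo is the capital of Egypt."
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```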
### About T5
```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```
[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
|
baykenney/bert-base-gpt2detector-topp92 | 5620750808eb7e050d67116467f2e15e4d99d243 | 2021-05-19T12:11:13.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | baykenney | null | baykenney/bert-base-gpt2detector-topp92 | 9 | null | transformers | 12,222 | Entry not found |
benmrtnz27/DialoGPT-small-misato | e8e699f526fff7afbd56351c31d8659a950a9332 | 2021-11-28T07:45:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | benmrtnz27 | null | benmrtnz27/DialoGPT-small-misato | 9 | null | transformers | 12,223 | ---
tags:
- conversational
---
# Misato Katsuragi DialoGPT Model
--- |
beomi/korean-lgbt-hatespeech-classifier | d42ae9eed42f2d48c3fecf22bf7c5516a1403793 | 2021-10-01T08:04:59.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | beomi | null | beomi/korean-lgbt-hatespeech-classifier | 9 | null | transformers | 12,224 | Entry not found |
bewgle/bart-large-mnli-bewgle | 83718c2198f023d85f8aaa161b9cf9340718a0f9 | 2020-12-09T18:30:05.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | false | bewgle | null | bewgle/bart-large-mnli-bewgle | 9 | null | transformers | 12,225 | ---
widget :
- text: "I like you. </s></s> I love you."
---
## bart-large-mnli
Trained by Facebook, [original source](https://github.com/pytorch/fairseq/tree/master/examples/bart)
|
bhavikardeshna/multilingual-bert-base-cased-arabic | 0f4e2c48b72c4d80b4847a8e90fb79c2090db985 | 2021-12-21T11:41:30.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bhavikardeshna | null | bhavikardeshna/multilingual-bert-base-cased-arabic | 9 | null | transformers | 12,226 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bigjoedata/friendlychatbot | b34a3241491566f352bf3e02c6c49e949dad96e7 | 2021-05-21T14:13:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | bigjoedata | null | bigjoedata/friendlychatbot | 9 | null | transformers | 12,227 | Entry not found |
bioformers/bioformer-cased-v1.0-mnli | 3d1dc4d84038d18d9970435b8c2e863c0d5eda60 | 2021-10-19T07:07:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | bioformers | null | bioformers/bioformer-cased-v1.0-mnli | 9 | null | transformers | 12,228 | [bioformer-cased-v1.0](https://huggingface.co/bioformers/bioformer-cased-v1.0) fined-tuned on the [MNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.803973
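A minimal inference sketch on a premise/hypothesis pair (the sentences are illustrative; check `model.config.id2label` for the exact class names and order):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bioformers/bioformer-cased-v1.0-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI is a premise/hypothesis pair task with three classes
# (entailment / neutral / contradiction).
inputs = tokenizer(
    "Aspirin reduces the risk of blood clots.",
    "Aspirin has an effect on blood clotting.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```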
## Speed
In our experiments, Bioformer's inference is 3x as fast as BERT-base/BioBERT/PubMedBERT, and 40% faster than DistilBERT.
## More information
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data. (source: https://huggingface.co/datasets/glue) |
boychaboy/kobias_klue-roberta-base | 6e43ed0732f10c5afef34b05b62af417000067dd | 2021-07-07T05:27:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/kobias_klue-roberta-base | 9 | null | transformers | 12,229 | Entry not found |
brad1141/bert-finetuned-ner | 00e5fbc7d74135f3c8a20b2997a242605879a93a | 2022-03-08T09:31:59.000Z | [
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brad1141 | null | brad1141/bert-finetuned-ner | 9 | 1 | transformers | 12,230 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6434
- Precision: 0.8589
- Recall: 0.8686
- F1: 0.8637
- Accuracy: 0.8324
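A minimal usage sketch with the token-classification pipeline (the sentence is illustrative, and the entity label set comes from the checkpoint's config, which the card does not list):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="brad1141/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into word-level entities
)
print(ner("The proposed method improves recall on long argumentative essays."))
```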
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.615 | 1.0 | 1741 | 0.6111 | 0.8200 | 0.8652 | 0.8420 | 0.8046 |
| 0.4795 | 2.0 | 3482 | 0.5366 | 0.8456 | 0.8803 | 0.8626 | 0.8301 |
| 0.3705 | 3.0 | 5223 | 0.5412 | 0.8527 | 0.8786 | 0.8655 | 0.8339 |
| 0.2749 | 4.0 | 6964 | 0.5906 | 0.8559 | 0.8711 | 0.8634 | 0.8316 |
| 0.2049 | 5.0 | 8705 | 0.6434 | 0.8589 | 0.8686 | 0.8637 | 0.8324 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
cahya/xls-r-ab-test | 68ad17dc1b1207cf647f8e239f77d154c84ea185 | 2022-03-23T18:29:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"model-index"
]
| automatic-speech-recognition | false | cahya | null | cahya/xls-r-ab-test | 9 | null | transformers | 12,231 | ---
language:
- ab
tags:
- ab
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 135.4675
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.10.3
|
castorini/tct_colbert-v2-hn-msmarco | 5ebac3ffc3def6cbb6625aab9dbeee8e4cc20e4d | 2021-08-12T01:06:21.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | castorini | null | castorini/tct_colbert-v2-hn-msmarco | 9 | null | transformers | 12,232 | This model is to reproduce a variant of TCT-ColBERT-V2 dense retrieval models described in the following paper:
> Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [In-Batch Negatives for Knowledge Distillation with Tightly-CoupledTeachers for Dense Retrieval.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_2021_RepL4NLP.pdf) _RepL4NLP 2021_.
You can find our reproduction report in Pyserini [here](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert-v2.md).
|
chitra/finetuned-adversial-paraphrase-model | 450da4af2fe501c8ac53b3d62144aaf82ae2cae5 | 2022-01-19T05:06:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | chitra | null | chitra/finetuned-adversial-paraphrase-model | 9 | null | transformers | 12,233 | Entry not found |
chrommium/xlm-roberta-large-finetuned-sent_in_news | 23bc7bfcb33a63467d2f9cb281e5f2fe6a710d6a | 2021-10-01T12:02:53.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | chrommium | null | chrommium/xlm-roberta-large-finetuned-sent_in_news | 9 | null | transformers | 12,234 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-large-finetuned-sent_in_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-sent_in_news
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8872
- Accuracy: 0.7273
- F1: 0.5125
## Model description
The model is asymmetric: it reacts to the placeholder X in the news text.
Try the following examples:
a) Агентство X понизило рейтинг банка Fitch. ("Agency X downgraded bank Fitch's rating.")
b) Агентство Fitch понизило рейтинг банка X. ("The Fitch agency downgraded bank X's rating.")
a) Компания Финам показала рекордную прибыль, говорят аналитики компании X. ("Company Finam posted record profits, say analysts at company X.")
b) Компания X показала рекордную прибыль, говорят аналитики компании Финам. ("Company X posted record profits, say analysts at Finam.")
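A minimal usage sketch with the text-classification pipeline, reusing one of the example sentences above (the card does not document the label names, so inspect `model.config.id2label` or the returned label string to interpret the prediction):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="chrommium/xlm-roberta-large-finetuned-sent_in_news",
)
# Mark the entity of interest with X, as in the examples above.
print(classifier("Агентство Fitch понизило рейтинг банка X."))
```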
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 106 | 1.2526 | 0.6108 | 0.1508 |
| No log | 2.0 | 212 | 1.1553 | 0.6648 | 0.1141 |
| No log | 3.0 | 318 | 1.1150 | 0.6591 | 0.1247 |
| No log | 4.0 | 424 | 1.0007 | 0.6705 | 0.1383 |
| 1.1323 | 5.0 | 530 | 0.9267 | 0.6733 | 0.2027 |
| 1.1323 | 6.0 | 636 | 1.0869 | 0.6335 | 0.4084 |
| 1.1323 | 7.0 | 742 | 1.1224 | 0.6932 | 0.4586 |
| 1.1323 | 8.0 | 848 | 1.2535 | 0.6307 | 0.3424 |
| 1.1323 | 9.0 | 954 | 1.4288 | 0.6932 | 0.4881 |
| 0.5252 | 10.0 | 1060 | 1.5856 | 0.6932 | 0.4739 |
| 0.5252 | 11.0 | 1166 | 1.7101 | 0.6733 | 0.4530 |
| 0.5252 | 12.0 | 1272 | 1.7330 | 0.6903 | 0.4750 |
| 0.5252 | 13.0 | 1378 | 1.8872 | 0.7273 | 0.5125 |
| 0.5252 | 14.0 | 1484 | 1.8797 | 0.7301 | 0.5033 |
| 0.1252 | 15.0 | 1590 | 1.9339 | 0.7330 | 0.5024 |
| 0.1252 | 16.0 | 1696 | 1.9632 | 0.7301 | 0.4967 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
congcongwang/t5-large-fine-tuned-wnut-2020-task3 | 6334b7d12041bcd5650c1dce3cf1a842f50c735e | 2021-06-23T12:13:01.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | congcongwang | null | congcongwang/t5-large-fine-tuned-wnut-2020-task3 | 9 | null | transformers | 12,235 | Entry not found |
copenlu/citebert-cite-only | 9140f5bb5c36155d929b2ffd750695207a1d865d | 2022-04-06T08:27:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | copenlu | null | copenlu/citebert-cite-only | 9 | null | transformers | 12,236 | Entry not found |
daekeun-ml/koelectra-small-v3-korsts | ce2a986e65d726120ae472a8bd62910f60707076 | 2022-02-13T06:22:23.000Z | [
"pytorch",
"electra",
"text-classification",
"ko",
"dataset:korsts",
"transformers",
"classification",
"sentence similarity",
"license:cc-by-4.0"
]
| text-classification | false | daekeun-ml | null | daekeun-ml/koelectra-small-v3-korsts | 9 | null | transformers | 12,237 | ---
language:
- ko
tags:
- classification
- sentence similarity
license: cc-by-4.0
datasets:
- korsts
metrics:
- accuracy
- f1
- precision
- recall
---
# Similarity between two sentences (fine-tuning with KoELECTRA-Small-v3 model and KorSTS dataset)
## Usage (Amazon SageMaker inference applicable)
It uses the SageMaker Inference Toolkit interface as is, so it can easily be deployed to a SageMaker endpoint.
### inference_korsts.py
```python
import json
import sys
import logging
import torch
from torch import nn
from transformers import ElectraConfig
from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification
logging.basicConfig(
level=logging.INFO,
format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
handlers=[
logging.FileHandler(filename='tmp.log'),
logging.StreamHandler(sys.stdout)
]
)
logger = logging.getLogger(__name__)
max_seq_length = 128
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-korsts")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
def model_fn(model_path=None):
####
# If you have your own trained model
# Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
####
#config = ElectraConfig.from_json_file(f'{model_path}/config.json')
#model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config)
model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-korsts')
model.to(device)
return model
def input_fn(input_data, content_type="application/jsonlines"):
data_str = input_data.decode("utf-8")
jsonlines = data_str.split("\n")
transformed_inputs = []
for jsonline in jsonlines:
text = json.loads(jsonline)["text"]
logger.info("input text: {}".format(text))
encode_plus_token = tokenizer.encode_plus(
text,
max_length=max_seq_length,
add_special_tokens=True,
return_token_type_ids=False,
padding="max_length",
return_attention_mask=True,
return_tensors="pt",
truncation=True,
)
transformed_inputs.append(encode_plus_token)
return transformed_inputs
def predict_fn(transformed_inputs, model):
predicted_classes = []
for data in transformed_inputs:
data = data.to(device)
output = model(**data)
prediction_dict = {}
prediction_dict['score'] = output[0].squeeze().cpu().detach().numpy().tolist()
jsonline = json.dumps(prediction_dict)
logger.info("jsonline: {}".format(jsonline))
predicted_classes.append(jsonline)
predicted_classes_jsonlines = "\n".join(predicted_classes)
return predicted_classes_jsonlines
def output_fn(outputs, accept="application/jsonlines"):
return outputs, accept
```
### test.py
```python
>>> from inference_korsts import model_fn, input_fn, predict_fn, output_fn
>>> with open('./samples/korsts.txt', mode='rb') as file:
>>> model_input_data = file.read()
>>> model = model_fn()
>>> transformed_inputs = input_fn(model_input_data)
>>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model)
>>> model_outputs = output_fn(predicted_classes_jsonlines)
>>> print(model_outputs[0])
[{inference_korsts.py:44} INFO - input text: ['맛있는 라면을 먹고 싶어요', '후루룩 쩝쩝 후루룩 쩝쩝 맛좋은 라면']
[{inference_korsts.py:44} INFO - input text: ['뽀로로는 내친구', '머신러닝은 러닝머신이 아닙니다.']
[{inference_korsts.py:71} INFO - jsonline: {"score": 4.786738872528076}
[{inference_korsts.py:71} INFO - jsonline: {"score": 0.2319069355726242}
{"score": 4.786738872528076}
{"score": 0.2319069355726242}
```
### Sample data (samples/korsts.txt)
```
{"text": ["맛있는 라면을 먹고 싶어요", "후루룩 쩝쩝 후루룩 쩝쩝 맛좋은 라면"]}
{"text": ["뽀로로는 내친구", "머신러닝은 러닝머신이 아닙니다."]}
```
## References
- KoELECTRA: https://github.com/monologg/KoELECTRA
- KorNLI and KorSTS Dataset: https://github.com/kakaobrain/KorNLUDatasets |
danurahul/alex_gpt3_Doctextfull | ad255c1a834f232ad1bd24567498221337bf1cd0 | 2021-05-21T15:18:16.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | danurahul | null | danurahul/alex_gpt3_Doctextfull | 9 | null | transformers | 12,238 | Entry not found |
dbmdz/electra-base-ukrainian-cased-discriminator | 1c4d59174274c0aeeb550f64c820495b0b789938 | 2020-11-10T12:26:52.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| null | false | dbmdz | null | dbmdz/electra-base-ukrainian-cased-discriminator | 9 | null | transformers | 12,239 | Entry not found |
delpart/distilbert-base-uncased-finetuned-ner | 588b9e61ba7a6ffb968951eae40378042964e0c5 | 2021-10-06T03:58:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | delpart | null | delpart/distilbert-base-uncased-finetuned-ner | 9 | null | transformers | 12,240 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.925115970841617
- name: Recall
type: recall
value: 0.9370175634858485
- name: F1
type: f1
value: 0.9310287333963209
- name: Accuracy
type: accuracy
value: 0.9839388692074285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9251
- Recall: 0.9370
- F1: 0.9310
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2435 | 1.0 | 878 | 0.0685 | 0.9182 | 0.9221 | 0.9202 | 0.9816 |
| 0.0515 | 2.0 | 1756 | 0.0602 | 0.9212 | 0.9368 | 0.9289 | 0.9834 |
| 0.0301 | 3.0 | 2634 | 0.0602 | 0.9251 | 0.9370 | 0.9310 | 0.9839 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
diegozs97/finetuned-chemprot-seed-0-2000k | ac6b73bad1aa48624f9998970dee982b0dd82347 | 2021-12-07T05:16:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-2000k | 9 | null | transformers | 12,241 | Entry not found |
diwank/silicone-deberta-pair | 405127a73ef60674b1780e053cd52e538bf7ba7d | 2022-03-07T08:43:13.000Z | [
"pytorch",
"tf",
"deberta",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | diwank | null | diwank/silicone-deberta-pair | 9 | null | transformers | 12,242 | ---
license: mit
---
# diwank/silicone-deberta-pair
`deberta-base`-based dialog acts classifier. Trained on the `balanced` variant of the [silicone-merged](https://huggingface.co/datasets/diwank/silicone-merged) dataset: a simplified merged dialog act data from datasets in the [silicone](https://huggingface.co/datasets/silicone) collection.
Takes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. **Outputs one of 11 labels**:
```python
(0, 'acknowledge')
(1, 'answer')
(2, 'backchannel')
(3, 'reply_yes')
(4, 'exclaim')
(5, 'say')
(6, 'reply_no')
(7, 'hold')
(8, 'ask')
(9, 'intent')
(10, 'ask_yes_no')
```
## Example:
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/silicone-deberta-pair")
convert_to_label = lambda n: [
['acknowledge',
'answer',
'backchannel',
'reply_yes',
'exclaim',
'say',
'reply_no',
'hold',
'ask',
'intent',
'ask_yes_no'
][i] for i in n
]
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label(predictions) # answer
```
## Report from W&B
https://wandb.ai/diwank/da-silicone-combined/reports/silicone-deberta-pair--VmlldzoxNTczNjE5?accessToken=yj1jz4c365z0y5b3olgzye7qgsl7qv9lxvqhmfhtb6300hql6veqa5xiq1skn8ys |
dkleczek/Polish-Hate-Speech-Detection-Herbert-Large | b1fca3e0d8f7730d5a14c347389b8d45321a8cf1 | 2021-07-16T05:46:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | dkleczek | null | dkleczek/Polish-Hate-Speech-Detection-Herbert-Large | 9 | null | transformers | 12,243 | Entry not found |
dkleczek/Polish_RoBERTa_v2_base_OPI | 83be73044078a76486d982c7c5e54fbf11603002 | 2021-08-26T22:10:12.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | dkleczek | null | dkleczek/Polish_RoBERTa_v2_base_OPI | 9 | null | transformers | 12,244 | Entry not found |
domdomreloaded/bert-base-uncased-finetuned-swag | 41422646b7fc1b772dce661d6f63832986631b10 | 2022-01-18T22:33:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:swag",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| multiple-choice | false | domdomreloaded | null | domdomreloaded/bert-base-uncased-finetuned-swag | 9 | null | transformers | 12,245 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6045
- Accuracy: 0.7960
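A minimal multiple-choice inference sketch (the context and candidate endings are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "domdomreloaded/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# SWAG-style input: one context paired with several candidate endings.
context = "She opens the oven and"
endings = ["takes out the tray.", "starts the car.", "dives into the pool.", "reads a newspaper."]

# Encode each (context, ending) pair, then add a batch dimension of size 1.
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(endings[logits.argmax(dim=-1).item()])
```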
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 |
| 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
dram-conflict/horror-scripts | 492f0c551a02b6d4719de028a3cbb74d2b8a46a8 | 2022-02-18T01:02:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | dram-conflict | null | dram-conflict/horror-scripts | 9 | null | transformers | 12,246 | Entry not found |
ehdwns1516/gpt3-kor-based_gpt2_review_SR5 | c8ec3a1243332465d64d51fa0ea2e28842a97e86 | 2021-07-23T01:19:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ehdwns1516 | null | ehdwns1516/gpt3-kor-based_gpt2_review_SR5 | 9 | null | transformers | 12,247 | # ehdwns1516/gpt3-kor-based_gpt2_review_SR5
* This model was trained on Korean reviews with a 5-star rating from the [naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
* Input the text from which you want to generate a review.
* If the context is longer than 1,200 characters, it may be truncated in the middle and the result may not come out well.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each 1 to 5 star
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR1](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR1)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR2](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR2)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR3](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR3)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR4](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR4)
* [ehdwns1516/gpt3-kor-based_gpt2_review_SR5](https://huggingface.co/ehdwns1516/gpt3-kor-based_gpt2_review_SR5)
## Overview
Language model: [gpt3-kor-small_based_on_gpt2](https://huggingface.co/kykim/gpt3-kor-small_based_on_gpt2)
Language: Korean
Training data: the review_body field of 5-star reviews in the [naver shopping review dataset](https://github.com/bab2min/corpus/tree/master/sentiment).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR5")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt3-kor-based_gpt2_review_SR5")
generator = pipeline(
"text-generation",
model="ehdwns1516/gpt3-kor-based_gpt2_review_SR5",
tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
eliza-dukim/bert-base-finetuned-sts-deprecated | 928fa592869042bf88fc8233b0933b042cd41d14 | 2021-08-11T02:04:53.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer"
]
| text-classification | false | eliza-dukim | null | eliza-dukim/bert-base-finetuned-sts-deprecated | 9 | null | transformers | 12,248 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model_index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metric:
name: Pearsonr
type: pearsonr
value: 0.837527365741951
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5657
- Pearsonr: 0.8375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 92 | 0.8280 | 0.7680 |
| No log | 2.0 | 184 | 0.6602 | 0.8185 |
| No log | 3.0 | 276 | 0.5939 | 0.8291 |
| No log | 4.0 | 368 | 0.5765 | 0.8367 |
| No log | 5.0 | 460 | 0.5657 | 0.8375 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
emrecan/distilbert-base-turkish-cased-snli_tr | 8f884f912e85a0eb4ae79bf86a0331305b73b44e | 2021-12-01T19:42:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
]
| zero-shot-classification | false | emrecan | null | emrecan/distilbert-base-turkish-cased-snli_tr | 9 | null | transformers | 12,249 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
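A minimal zero-shot usage sketch with the `transformers` pipeline, reusing the widget example above (for Turkish inputs a Turkish `hypothesis_template` can be passed, since the pipeline's default template is in English):
```python
from transformers import pipeline

# Zero-shot classification via NLI: each candidate label is turned into a
# hypothesis and scored against the input text.
classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/distilbert-base-turkish-cased-snli_tr",
)
print(classifier(
    "Dolar yükselmeye devam ediyor.",                  # "The dollar keeps rising."
    candidate_labels=["ekonomi", "siyaset", "spor"],   # economy, politics, sports
))
```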
|
enod/esg-bert | 9e022f71e4bfaac6774077ed4f4f42058b44dcf8 | 2021-12-01T00:41:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | enod | null | enod/esg-bert | 9 | null | transformers | 12,250 | Entry not found |
ensamblador/gpt2-es-8heads | 455748b17ce17739daa855a69998cdceb581d512 | 2021-05-21T15:53:28.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ensamblador | null | ensamblador/gpt2-es-8heads | 9 | null | transformers | 12,251 | Entry not found |
ffsouza/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro | aba70a6c4b11e0b01b45c664f91752552a38b73c | 2021-12-03T21:51:30.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16_en_ro_pre_processed",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ffsouza | null | ffsouza/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro | 9 | null | transformers | 12,252 | ---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
model-index:
- name: t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
flax-community/roberta-pretraining-hindi | 91428432765ef16c941482f0c180488a8e807766 | 2021-07-15T15:44:11.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | flax-community | null | flax-community/roberta-pretraining-hindi | 9 | null | transformers | 12,253 | ---
widget:
- text: "शुभ प्रभात। आशा करता हूं कि आपका <mask> शुभ हो"
---
roberta-pretraining-hindi |
flexudy/cheapity3 | e6f2540cb8b5ce31684af3f42e3fa70204bbf7d2 | 2021-12-27T13:06:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | flexudy | null | flexudy/cheapity3 | 9 | null | transformers | 12,254 | # Cheapity3 🐷
GPT-like T5 model trained to generate text in multiple languages.
## Motivation
- GPT models are expensive to run.
- GPT models are monolingual.
## Solution
- Maybe, Small Models aren't Terrible (*SMarT*)
- Plus, they are cheaper to run.
I fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and multiple academic text snippets from
various domains like tech, law, finance and science etc. to generate text, just like GPT models do.
## Usage - [NLPlayStore](https://github.com/flexudy/NLPlayStore) 👈
```python
from store.service_management import ServiceManager
service = ServiceManager().get_service("cheapity3")
service.install()
service = service.launch()
input_text = "The mechanical engineering field requires ... "
generated_texts = service.play(input_text, 15)  # A list of generated texts
```
## Usage - Hugging Face Transformers 🤗
- Provide some text e.g `"Italy, officially the Italian Republic is a country consisting of"`
- Tell Cheapity3 how many words you want to generate e.g `15` -- 😃 Yes, you can control the length.
- Cheapity3 reads your text and generates a continuation containing approximately 15 words.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("flexudy/cheapity3")
model = AutoModelWithLMHead.from_pretrained("flexudy/cheapity3")
input_text = """The mechanical engineering field requires an understanding of core areas including mechanics, dynamics,
thermodynamics, materials science, structural analysis, and
electricity. { _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ }""" # 15 words
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512)
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
outputs = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
max_length=128,
do_sample=True,
early_stopping=True,
num_return_sequences=4,
repetition_penalty=2.5
)
for i in range(4):
print(tokenizer.decode(outputs[i], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
**INPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.**
```
> Cheapity3 continues with beam search:
... The field of mechanical engineering is a broad field that includes many core areas of engineering.
> Cheapity3 continues with sampling and top_k=50:
... Developing the knowledge base for these core areas will enable engineers to build their capabilities rapidly and efficiently. ...
... The field of mechanics offers a variety and broad range for applications throughout the engineering/technological fields. ...
... Mechanics generally is not understood by students. While they can be employed in the field, mechanical engineering ...
... Introduction to mechanical engineering and core fields including chemical products, materials science, structural analysis, and geomatics ...
```
## Pretty decent right?
Hence, whenever you feel like GPT3 is too expensive, Cheapity3 comes to the rescue 🤗.
## Model Training FYI
- T5-base model
- Trained on ONLY 1M sentences from English, French and German text
- Mostly text from Wikipedia, arxiv and QA datasets
- Learning rate: 0.00003
- 2 epochs
- Max input: 512 tokens
- Max output: 128 tokens
|
aware-ai/t5-skills | 69bf6bb62eb73a132d4a0a96fe16ba529409a39e | 2021-11-25T16:56:51.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"input",
"output",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | aware-ai | null | aware-ai/t5-skills | 9 | null | transformers | 12,255 | ---
language:
- input
- output
tags:
- generated_from_trainer
model-index:
- name: t5-skills
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-skills
This model is a fine-tuned version of [flozi00/t5-skills](https://huggingface.co/flozi00/t5-skills) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.14.0
- Tokenizers 0.10.2
|
gchhablani/fnet-large-finetuned-stsb | 5ec8f5c51fd54066b52f4330884ca63f753b5e72 | 2021-10-07T17:02:23.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | gchhablani | null | gchhablani/fnet-large-finetuned-stsb | 9 | null | transformers | 12,256 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: fnet-large-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8532669137129205
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-stsb
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6250
- Pearson: 0.8554
- Spearmanr: 0.8533
- Combined Score: 0.8543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.0727 | 1.0 | 1438 | 0.7718 | 0.8187 | 0.8240 | 0.8214 |
| 0.4619 | 2.0 | 2876 | 0.7704 | 0.8472 | 0.8500 | 0.8486 |
| 0.2401 | 3.0 | 4314 | 0.6250 | 0.8554 | 0.8533 | 0.8543 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ghadeermobasher/BC5CDR-Chem-Modified_biobert-v1.1_latest | 2016416bc3c43cf145bc951ad123ecb69cabdf12 | 2022-02-21T22:06:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified_biobert-v1.1_latest | 9 | null | transformers | 12,257 | Entry not found |
ghadeermobasher/BC5CDR-Chem-Modified_pubmed_abstract_3 | d6a51e6b518aca6be4e40f3606b2c3dd1a6de723 | 2022-02-16T19:57:27.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified_pubmed_abstract_3 | 9 | null | transformers | 12,258 | Entry not found |
ghadeermobasher/BC5CDR-Chem-Modified_pubmed_abstract_latest | 603c408bb6a89bf7b8bb1e7cd4626a43ecf319ec | 2022-02-21T22:04:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified_pubmed_abstract_latest | 9 | null | transformers | 12,259 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-Disease-balanced-BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext | 7d0b2a183dae2e92b809b5eecade85b605009213 | 2022-01-23T01:03:07.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-Disease-balanced-BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext | 9 | null | transformers | 12,260 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-from-PubMedBERT-fulltext | 7bf86432a9b8bd55531b865bdfe2c1074f8917a4 | 2022-01-24T18:42:18.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-Disease-balanced-SapBERT-from-PubMedBERT-fulltext | 9 | null | transformers | 12,261 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-Disease-balanced-pubmedbert | ed9ecde574662e1c2c5a22cb1e3531acbd929896 | 2022-02-07T12:40:46.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-Disease-balanced-pubmedbert | 9 | null | transformers | 12,262 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-Disease-balanced-scibert_scivocab_cased | aaf1c8eeb5ddcfae8a5c3a2d1286ff0b8a423eaa | 2022-01-23T19:01:49.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-Disease-balanced-scibert_scivocab_cased | 9 | null | transformers | 12,263 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-imbalanced-PubMedBERT-base-uncased-abstract_latest | de3987fb6f95a1f21d8707cfcff1cdea1b41969d | 2022-02-21T22:58:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-imbalanced-PubMedBERT-base-uncased-abstract_latest | 9 | null | transformers | 12,264 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-imbalanced-biobert | 5c3d6cdf4b37fc31bfdd9e65814417ef30a20395 | 2022-02-10T13:26:22.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-imbalanced-biobert | 9 | null | transformers | 12,265 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-imbalanced-pubmedbert | 14353f357e0d49a5e30c3078a5a79e6534b1bd54 | 2022-02-10T15:07:22.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-imbalanced-pubmedbert | 9 | null | transformers | 12,266 | Entry not found |
ghadeermobasher/BC5CDR-Chemical-imbalanced-scibert_scivocab_uncased_latest | 3709fe8be767587c4bdffcf8924a611ebe8890a3 | 2022-02-21T23:04:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-imbalanced-scibert_scivocab_uncased_latest | 9 | null | transformers | 12,267 | Entry not found |
ghadeermobasher/BC5CDR-Chemical_Modified_PubMedBERT | d8f4c0d38cd50c98bf7c036a8a920595f99c098e | 2022-01-21T22:46:09.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical_Modified_PubMedBERT | 9 | null | transformers | 12,268 | Entry not found |
ghadeermobasher/BC5CDR-Chemical_Modified_scibert_scivocab_cased | 283714094976904d9d96ce68088cc5857d39c15c | 2022-01-23T16:41:46.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical_Modified_scibert_scivocab_cased | 9 | null | transformers | 12,269 | Entry not found |
ghadeermobasher/CRAFT-Chem-Modified_PubMedBERT | 59365e10a3aec99be8da1071815827528e652085 | 2022-01-22T02:07:22.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/CRAFT-Chem-Modified_PubMedBERT | 9 | null | transformers | 12,270 | Entry not found |
google/t5-11b-ssm-nqo | 4807e5896d668a43e92df3da1e66a84c14aed261 | 2020-12-07T08:43:03.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-11b-ssm-nqo | 9 | null | transformers | 12,271 | ---
language: en
datasets:
- c4
- wikipedia
- natural_questions
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 90% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-large|https://huggingface.co/google/t5-large-ssm-nqo|29.0|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nqo|35.2|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nqo|31.7|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-nqo**|**34.8**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-nqo")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
google/t5-11b-ssm | be64585e27c466549dcb886988bd9265f984b1c4 | 2020-12-07T19:49:11.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-11b-ssm | 9 | null | transformers | 12,272 | ---
language: en
datasets:
- c4
- wikipedia
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is usable for closed book question answering.
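As a rough, hedged illustration (not part of the original card), fine-tuning boils down to feeding question/answer pairs in text-to-text form; the snippet below only shows how a single forward pass with a loss could look and leaves the actual training loop and dataset handling to you. The same code works for the smaller `ssm` variants if the 11B checkpoint is too large for your hardware.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-11b-ssm")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm")

# One (question, answer) training example in text-to-text form.
inputs = tokenizer("When was Franklin D. Roosevelt born?", return_tensors="pt")
labels = tokenizer("January 30, 1882", return_tensors="pt").input_ids

# The loss of this forward pass is what an optimizer would minimize during fine-tuning.
loss = model(**inputs, labels=labels).loss
```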
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
google/t5-efficient-base-nl16 | 102f0df0fb277e62d1123552a3ead479e3cb8571 | 2022-02-15T10:53:21.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-base-nl16 | 9 | null | transformers | 12,273 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-BASE-NL16 (Deep-Narrow version)
T5-Efficient-BASE-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-nl16** - is of model type **Base** with the following variations:
- **nl** is **16**
It has **289.02** million parameters and thus requires *ca.* **1156.07 MB** of memory in full precision (*fp32*)
or **578.03 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
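For a quick sanity check, these abbreviations map onto the standard T5 config attributes in Transformers, so the depth variation of this checkpoint can be inspected without loading any weights (a small sketch, assuming the usual `T5Config` field names):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/t5-efficient-base-nl16")

# nl -> num_layers / num_decoder_layers, dm -> d_model, kv -> d_kv, nh -> num_heads, ff -> d_ff
print(config.num_layers, config.num_decoder_layers)  # should both reflect nl = 16
print(config.d_model, config.d_kv, config.num_heads, config.d_ff)
```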
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-small-el32 | 0e52acdf0f692e0550f21e7c924b5083196da5a9 | 2022-02-15T10:54:02.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-el32 | 9 | null | transformers | 12,274 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-EL32 (Deep-Narrow version)
T5-Efficient-SMALL-EL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-el32** - is of model type **Small** with the following variations:
- **el** is **32**
It has **142.36** million parameters and thus requires *ca.* **569.44 MB** of memory in full precision (*fp32*)
or **284.72 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
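Because this particular variation only deepens the encoder (*el* = 32), the encoder and decoder depths differ; this can be verified from the configuration alone, without downloading the weights (a small sketch, assuming the usual `T5Config` field names):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/t5-efficient-small-el32")

# el maps to the encoder depth, while the decoder keeps its own (Small-sized) depth.
print("encoder blocks:", config.num_layers)          # should reflect el = 32
print("decoder blocks:", config.num_decoder_layers)  # the default Small decoder depth
```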
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-small-nl22 | 4a2a1d294c3ce9567b278a19284939baa96372de | 2022-02-15T10:50:49.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-nl22 | 9 | null | transformers | 12,275 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-NL22 (Deep-Narrow version)
T5-Efficient-SMALL-NL22 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-nl22** - is of model type **Small** with the following variations:
- **nl** is **22**
It has **178.04** million parameters and thus requires *ca.* **712.16 MB** of memory in full precision (*fp32*)
or **356.08 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
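As a hedged, minimal sketch of what fine-tuning means in practice (not taken from the card itself): prepend a task prefix, tokenize input and target, and train on the seq2seq loss. The example below shows a single summarization-style forward pass; the optimizer loop and a real dataset are omitted.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-nl22")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-small-nl22")

inputs = tokenizer(
    "summarize: The tower is 324 metres tall, about the same height as an 81-storey building.",
    return_tensors="pt",
)
labels = tokenizer("The tower is about as tall as an 81-storey building.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # minimize this during fine-tuning
```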
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-small-nl36 | fe587de76b4a6cf711b3bdf6d1f0d3bb4d099056 | 2022-02-15T10:57:12.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-nl36 | 9 | 2 | transformers | 12,276 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-NL36 (Deep-Narrow version)
T5-Efficient-SMALL-NL36 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-nl36** - is of model type **Small** with the following variations:
- **nl** is **36**
It has **280.87** million parameters and thus requires *ca.* **1123.47 MB** of memory in full precision (*fp32*)
or **561.74 MB** of memory in half precision (*fp16* or *bf16*).
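The memory figures above follow directly from the parameter count: four bytes per parameter in fp32 and two bytes in fp16/bf16, with megabytes counted decimally (10^6 bytes). A one-line check (my arithmetic, not taken from the card):

```python
n_params = 280.87e6        # parameters reported above
print(n_params * 4 / 1e6)  # ~1123.5 MB in fp32
print(n_params * 2 / 1e6)  # ~561.7 MB in fp16/bf16
```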
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-small-nl48 | 23bed4ecbe0a1a568f887c82ee5350ce264f515a | 2022-02-15T10:50:59.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-nl48 | 9 | null | transformers | 12,277 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-NL48 (Deep-Narrow version)
T5-Efficient-SMALL-NL48 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-nl48** - is of model type **Small** with the following variations:
- **nl** is **48**
It has **369.01** million parameters and thus requires *ca.* **1476.03 MB** of memory in full precision (*fp32*)
or **738.02 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
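To make the pre-training objective concrete: in span-based MLM, contiguous spans of the input are replaced by sentinel tokens, and the target reconstructs the dropped spans behind the same sentinels. A minimal sketch of one such example (the sentence is illustrative, not from the C4 training data):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-nl48")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-small-nl48")

# Masked spans become <extra_id_*> sentinels in the input; the target restores them.
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids

loss = model(input_ids=input_ids, labels=labels).loss  # the quantity minimized during pre-training
```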
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-tiny-ff3000 | b2f2c4021032495fca6dffcdd164b49838256484 | 2022-02-15T10:54:30.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-tiny-ff3000 | 9 | null | transformers | 12,278 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-TINY-FF3000 (Deep-Narrow version)
T5-Efficient-TINY-FF3000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-tiny-ff3000** - is of model type **Tiny** with the following variations:
- **ff** is **3000**
It has **23.97** million parameters and thus requires *ca.* **95.88 MB** of memory in full precision (*fp32*)
or **47.94 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the encoder and the decoder depth correspond to *nl*.
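For this checkpoint, the only variation is the widened feed-forward dimension; both the widened dimension and the reported parameter count can be checked directly (a small sketch, assuming the usual `T5Config`/PyTorch attribute names):

```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained("google/t5-efficient-tiny-ff3000")
print(config.d_ff)  # should reflect ff = 3000

model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-tiny-ff3000")
n_params = sum(p.numel() for p in model.parameters())
print(n_params / 1e6)  # should be close to the ~23.97 million parameters reported above
```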
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-xl-nl28 | 1acfe721972953fe6104d319d659e0ff06c053f4 | 2022-02-15T10:54:36.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-xl-nl28 | 9 | null | transformers | 12,279 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-XL-NL28 (Deep-Narrow version)
T5-Efficient-XL-NL28 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-xl-nl28** - is of model type **Xl** with the following variations:
- **nl** is **28**
It has **3321.45** million parameters and thus requires *ca.* **13285.79 MB** of memory in full precision (*fp32*)
or **6642.9 MB** of memory in half precision (*fp16* or *bf16*).
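The memory figures above follow directly from the parameter count: 4 bytes per parameter in full precision, 2 bytes in half precision, with 1 MB taken as 10^6 bytes. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope memory estimate from the parameter count quoted above.
# Assumes 4 bytes/param (fp32), 2 bytes/param (fp16/bf16) and 1 MB = 10**6 bytes.
params_millions = 3321.45  # t5-efficient-xl-nl28, in millions of parameters

mb_fp32 = params_millions * 1e6 * 4 / 1e6   # ~13285.8 MB
mb_fp16 = params_millions * 1e6 * 2 / 1e6   # ~6642.9 MB

print(f"fp32: {mb_fp32:.2f} MB, fp16/bf16: {mb_fp16:.2f} MB")
```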
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal loading sketch follows these examples):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
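Whichever script you pick, the checkpoint itself loads like any other T5 model. Below is a minimal loading sketch (the toy input/target pair is made up for illustration, and this ~3.3B-parameter model needs the memory budget quoted above):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Pretrained-only checkpoint: it still has to be fine-tuned on a downstream task.
tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-xl-nl28")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-xl-nl28")

# One training step in the usual text-to-text format (toy example).
inputs = tokenizer("summarize: The tower is 324 metres tall.", return_tensors="pt")
labels = tokenizer("The tower is 324 metres tall.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()
```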
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-xxl-nl4 | 8b983973ed25dba3d0153055c5f7206e890782ac | 2022-02-15T10:54:39.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-xxl-nl4 | 9 | null | transformers | 12,280 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-XXL-NL4 (Deep-Narrow version)
T5-Efficient-XXL-NL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-xxl-nl4** - is of model type **Xxl** with the following variations:
- **nl** is **4**
It has **1207.34** million parameters and thus requires *ca.* **4829.35 MB** of memory in full precision (*fp32*)
or **2414.68 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
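In the Transformers implementation these abbreviations map onto fields of `T5Config`. A small sketch to inspect them for this checkpoint (attribute names are those of the current T5 implementation; the expected values come from the description above):

```python
from transformers import T5Config

config = T5Config.from_pretrained("google/t5-efficient-xxl-nl4")

# nl -> num_layers / num_decoder_layers, dm -> d_model, kv -> d_kv,
# nh -> num_heads, ff -> d_ff
print(config.num_layers, config.num_decoder_layers)  # encoder/decoder depth, 4/4 here
print(config.d_model, config.d_kv, config.num_heads, config.d_ff)  # XXL width settings
```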
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-xl-ssm-nq | dac789cc92a7bcf4a501ac0f9ff7af73f04cd6ff | 2020-12-07T08:41:47.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-xl-ssm-nq | 9 | null | transformers | 12,281 | ---
language: en
datasets:
- c4
- wikipedia
- natural_questions
pipeline_tag: text2text-generation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|**T5-xl**|**https://huggingface.co/google/t5-xl-ssm-nq**|**35.6**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nq|37.9|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nq|33.2|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nq|36.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xl-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xl-ssm-nq")

# The question is asked without any accompanying context; the answer is
# generated purely from the knowledge stored in the model's parameters.
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
google/tapas-tiny-finetuned-tabfact | 65033a6c84cd15d222a718c0398b0a7aefe3e655 | 2021-11-29T13:06:24.000Z | [
"pytorch",
"tf",
"tapas",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"transformers",
"sequence-classification",
"license:apache-2.0"
]
| text-classification | false | google | null | google/tapas-tiny-finetuned-tabfact | 9 | null | transformers | 12,282 | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS tiny model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_tiny`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly training this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
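As a rough illustration only — a minimal sketch with a made-up table and sentence, not the official example — classification with this checkpoint looks roughly as follows (the TAPAS models in Transformers may additionally require the `torch-scatter` package):

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForSequenceClassification

model_name = "google/tapas-tiny-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# Tables are passed as pandas DataFrames with string-valued cells.
table = pd.DataFrame({"Player": ["Alice", "Bob"], "Goals": ["3", "1"]})
sentence = "Alice scored more goals than Bob."

inputs = tokenizer(table=table, queries=sentence, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = int(logits.argmax(-1))
print(predicted_class, model.config.id2label[predicted_class])  # supported vs. refuted
```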
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
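A generic PyTorch re-creation of that optimizer/schedule setup might look like the sketch below (the original fine-tuning used the authors' own TPU training code; the checkpoint is reloaded here purely to have a concrete model object):

```python
import torch
from transformers import TapasForSequenceClassification, get_linear_schedule_with_warmup

model = TapasForSequenceClassification.from_pretrained("google/tapas-tiny-finetuned-tabfact")

total_steps = 80_000                    # 80,000 fine-tuning steps
warmup_steps = int(0.05 * total_steps)  # warmup ratio of 0.05

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```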
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` |
gurkan08/turkish-ner | 740598b9aa512f9bbd78f7bc43afa67682ea42d7 | 2021-05-19T17:52:22.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | gurkan08 | null | gurkan08/turkish-ner | 9 | null | transformers | 12,283 | Entry not found |
haotieu/vietnamese-summarization | ffc08e83a2a42c8595b6bfb857c12f69589b2a41 | 2022-01-24T15:31:00.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | haotieu | null | haotieu/vietnamese-summarization | 9 | null | transformers | 12,284 | Entry not found |
harish/EN-AStitchTask1A-XLNet-TrueFalse-0-FewShot-0-BEST | e9d991619f6a338c5aa4a404b3965119a517808f | 2021-09-05T00:37:35.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
]
| text-classification | false | harish | null | harish/EN-AStitchTask1A-XLNet-TrueFalse-0-FewShot-0-BEST | 9 | null | transformers | 12,285 | Entry not found |
heliart/PhishingEmailGeneration | 39c4be0817e86b6d88c6de2ea3b06751f9941960 | 2021-11-16T05:59:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | heliart | null | heliart/PhishingEmailGeneration | 9 | null | transformers | 12,286 | Entry not found |
hireddivas/dialoGPT-small-sonic | abaafb8c757f5958b2944c579357d76503039f67 | 2022-05-20T03:19:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | hireddivas | null | hireddivas/dialoGPT-small-sonic | 9 | null | transformers | 12,287 | ---
tags:
- conversational
---
GPT-2 chatbot - talk to Sonic |
huggingartists/death-grips | bfe45a9a2a6c381f9bcd577c2a3279bba76c6a92 | 2022-02-12T08:56:17.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/death-grips",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/death-grips | 9 | null | transformers | 12,288 | ---
language: en
datasets:
- huggingartists/death-grips
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/de4ca387303c4b46007ca1072c2e57d0.600x600x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Death Grips</div>
<a href="https://genius.com/artists/death-grips">
<div style="text-align: center; font-size: 14px;">@death-grips</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Death Grips.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/death-grips).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/death-grips")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2hmeenl7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Death Grips's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/226ak5bw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/226ak5bw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/death-grips')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/death-grips")
model = AutoModelWithLMHead.from_pretrained("huggingartists/death-grips")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/epic-rap-battles-of-history | 6ed5305ab67f4eacc9c4a2af78243f4c51562f34 | 2021-08-31T07:38:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/epic-rap-battles-of-history",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/epic-rap-battles-of-history | 9 | null | transformers | 12,289 | ---
language: en
datasets:
- huggingartists/epic-rap-battles-of-history
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/86da58e97d308e9127100e7954dc1d74.900x900x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Epic Rap Battles of History</div>
<a href="https://genius.com/artists/epic-rap-battles-of-history">
<div style="text-align: center; font-size: 14px;">@epic-rap-battles-of-history</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Epic Rap Battles of History.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/epic-rap-battles-of-history).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/epic-rap-battles-of-history")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/ujomrrjb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Epic Rap Battles of History's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1s03lfls) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1s03lfls/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/epic-rap-battles-of-history')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/epic-rap-battles-of-history")
model = AutoModelWithLMHead.from_pretrained("huggingartists/epic-rap-battles-of-history")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/ghostemane | 2922f4d75a31a520480ff0ef1928844a93532f13 | 2021-08-10T10:49:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/ghostemane",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/ghostemane | 9 | null | transformers | 12,290 | ---
language: en
datasets:
- huggingartists/ghostemane
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c4407bb331c50916c1dfdc7f875f73a9.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ghostemane</div>
<a href="https://genius.com/artists/ghostemane">
<div style="text-align: center; font-size: 14px;">@ghostemane</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Ghostemane.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/ghostemane).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/ghostemane")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ou29taa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Ghostemane's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/futdflju) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/futdflju/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/ghostemane')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/ghostemane")
model = AutoModelWithLMHead.from_pretrained("huggingartists/ghostemane")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/linkin-park | adabd3c3ea6f23c9ddc114975eeb245be2214b36 | 2021-10-30T14:56:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/linkin-park",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/linkin-park | 9 | null | transformers | 12,291 | ---
language: en
datasets:
- huggingartists/linkin-park
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a865aac7693c39977b9b402dc364908e.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Linkin Park</div>
<a href="https://genius.com/artists/linkin-park">
<div style="text-align: center; font-size: 14px;">@linkin-park</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Linkin Park.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/linkin-park).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/linkin-park")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3mtr0u4z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Linkin Park's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/linkin-park')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/linkin-park")
model = AutoModelWithLMHead.from_pretrained("huggingartists/linkin-park")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/anotheredenrpg | b8cd04701b7d9431a3021d1734d69df51f04bcab | 2021-05-21T19:10:05.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/anotheredenrpg | 9 | null | transformers | 12,292 | ---
language: en
thumbnail: https://www.huggingtweets.com/anotheredenrpg/1615596205043/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1354989664906563587/F91Gg-Qj_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Another Eden: The Cat Beyond Time and Space 🤖 AI Bot </div>
<div style="font-size: 15px">@anotheredenrpg bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@anotheredenrpg's tweets](https://twitter.com/anotheredenrpg).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1651 |
| Retweets | 82 |
| Short tweets | 75 |
| Tweets kept | 1494 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1gpo3g75/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anotheredenrpg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rb3206ol) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rb3206ol/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/anotheredenrpg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/arsondoer | cfb9994c0f121140818b973106114ed4618d078b | 2021-05-21T19:28:10.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/arsondoer | 9 | null | transformers | 12,293 | ---
language: en
thumbnail: https://www.huggingtweets.com/arsondoer/1616645630695/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1342836590998134786/tDwNDfFs_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">frostington ambassady the third (5’2”) 🤖 AI Bot </div>
<div style="font-size: 15px">@arsondoer bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@arsondoer's tweets](https://twitter.com/arsondoer).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3200 |
| Retweets | 270 |
| Short tweets | 799 |
| Tweets kept | 2131 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mhuavj6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @arsondoer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fz88vjc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fz88vjc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/arsondoer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/benrcongdon | 3f768578ec763070709e371eec69dfb3793268a9 | 2021-05-21T20:23:27.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/benrcongdon | 9 | null | transformers | 12,294 | ---
language: en
thumbnail: https://www.huggingtweets.com/benrcongdon/1616637140236/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1108382380723597313/hDIUPnFe_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ben Congdon 🤖 AI Bot </div>
<div style="font-size: 15px">@benrcongdon bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@benrcongdon's tweets](https://twitter.com/benrcongdon).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3219 |
| Retweets | 394 |
| Short tweets | 515 |
| Tweets kept | 2310 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3aazmqd6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @benrcongdon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/7zvkav4e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/7zvkav4e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/benrcongdon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/bleaksigilkeep | 2eb468ee644ecaa82c6fab2156e51595e4162c2b | 2021-05-21T20:47:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/bleaksigilkeep | 9 | null | transformers | 12,295 | ---
language: en
thumbnail: https://www.huggingtweets.com/bleaksigilkeep/1614100737277/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1357543393757433856/lBrNiipb_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ghislaine Maxwell's Fat Tiddies Apologist 🤖 AI Bot </div>
<div style="font-size: 15px">@bleaksigilkeep bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@bleaksigilkeep's tweets](https://twitter.com/bleaksigilkeep).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3206 |
| Retweets | 1132 |
| Short tweets | 387 |
| Tweets kept | 1687 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/200hepvo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bleaksigilkeep's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ugdbbm9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ugdbbm9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bleaksigilkeep')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/butfurniture | 121054cdddcf4a1540ac39159c0b983d5894d325 | 2021-05-21T21:27:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/butfurniture | 9 | null | transformers | 12,296 | ---
language: en
thumbnail: https://www.huggingtweets.com/butfurniture/1616690321353/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/727827522436521984/ABgwelzi_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Matthew 🤖 AI Bot </div>
<div style="font-size: 15px">@butfurniture bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@butfurniture's tweets](https://twitter.com/butfurniture).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1787 |
| Retweets | 524 |
| Short tweets | 121 |
| Tweets kept | 1142 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18eo7tos/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @butfurniture's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jx81czr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jx81czr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/butfurniture')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/chafickle | 4129337b9e77c19a7c608e2871c796f289c8832f | 2021-05-21T22:02:33.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/chafickle | 9 | null | transformers | 12,297 | ---
language: en
thumbnail: https://www.huggingtweets.com/chafickle/1617818342784/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1376951341709418502/GGg8ox0R_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">chafenti 🌸 🤖 AI Bot </div>
<div style="font-size: 15px">@chafickle bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@chafickle's tweets](https://twitter.com/chafickle).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3199 |
| Retweets | 530 |
| Short tweets | 845 |
| Tweets kept | 1824 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/npilc6ji/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chafickle's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b1pr6zw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b1pr6zw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chafickle')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/confusionm8trix | 8aa8589bdf79e34a6cebb43672ce561c494c0e44 | 2021-05-21T23:21:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/confusionm8trix | 9 | null | transformers | 12,298 | ---
language: en
thumbnail: https://www.huggingtweets.com/confusionm8trix/1616684874152/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1293318707469410304/OfdJ5rPz_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">henry 🤖 AI Bot </div>
<div style="font-size: 15px">@confusionm8trix bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@confusionm8trix's tweets](https://twitter.com/confusionm8trix).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 766 |
| Retweets | 52 |
| Short tweets | 108 |
| Tweets kept | 606 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2otdqnlb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @confusionm8trix's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1tgtfwi1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1tgtfwi1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/confusionm8trix')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cubytes | 3148c5a0fe80cac1114706f68cb3f3c0e293d59c | 2022-07-03T02:42:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/cubytes | 9 | 1 | transformers | 12,299 | ---
language: en
thumbnail: http://www.huggingtweets.com/cubytes/1656815845890/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1540509467934162944/lgbdvsqz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">|i!i|</div>
<div style="text-align: center; font-size: 14px;">@cubytes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from |i!i|.
| Data | |i!i| |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 1 |
| Short tweets | 112 |
| Tweets kept | 3136 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xojbovmo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cubytes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1x1wffyb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1x1wffyb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cubytes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|