modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Narsil/small2 | 8528e18d7c6f19a6233f143c721d72777b12dbf8 | 2021-08-26T15:50:45.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Narsil | null | Narsil/small2 | 312 | null | transformers | 2,900 | Small change. again. again ? again.
|
cosmicray001/prod-harry | 2756c4ce3be441a45ee5c3fcbdb218c702317192 | 2021-08-29T14:23:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cosmicray001 | null | cosmicray001/prod-harry | 312 | null | transformers | 2,901 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
huggingtweets/pabloiglesias | 3d30ed2ab81feb7ce8eac1df3d7fa03db8c13e4e | 2021-05-22T17:52:55.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pabloiglesias | 312 | 1 | transformers | 2,902 | ---
language: en
thumbnail: https://www.huggingtweets.com/pabloiglesias/1621002350351/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1337047075859668992/vsS3FHEd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pablo Iglesias 🔻</div>
<div style="text-align: center; font-size: 14px;">@pabloiglesias</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pablo Iglesias 🔻.
| Data | Pablo Iglesias 🔻 |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 1157 |
| Short tweets | 191 |
| Tweets kept | 1882 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cxyib7q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pabloiglesias's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/auuc2mv0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/auuc2mv0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pabloiglesias')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wallstreetbets | bac826eea71650beb05ba0da828c1b33554a99d9 | 2021-05-23T04:11:10.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wallstreetbets | 312 | 1 | transformers | 2,903 | ---
language: en
thumbnail: https://www.huggingtweets.com/wallstreetbets/1613146226664/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1355305650432188416/zAPHj9_3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">WallStreetBets 🤖 AI Bot </div>
<div style="font-size: 15px">@wallstreetbets bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@wallstreetbets's tweets](https://twitter.com/wallstreetbets).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 298 |
| Short tweets | 294 |
| Tweets kept | 2642 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hhzrzcsh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wallstreetbets's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gyh32b7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gyh32b7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wallstreetbets')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
persiannlp/mt5-small-parsinlu-translation_en_fa | 881cdbcace427facfb844985d54384e15b65d10a | 2021-09-23T16:20:48.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"machine-translation",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-small-parsinlu-translation_en_fa | 312 | null | transformers | 2,904 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (English -> Persian).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-translation_en_fa"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)
    return output
run_model("Praise be to Allah, the Cherisher and Sustainer of the worlds;")
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
run_model("I want to pursue PhD in Computer Science about social network,what is the open problem in social networks?")
```
which should output:
```
['برای الله، یعنی چرنده و سوزان دنیا، تحسین کنید']
['خودش را در سفید پوسته می کند و به صورت عشق برادرانه']
['او از تمام بلاگرها و سازمان هایی که حمایتشان را نشان می داد']
['در طول ماه آوریل و دسامبر در والی فیودورونا نزدیک بیکر']
['من می خواهم در مورد شبکه اجتماعی تحقیقات علوم کامپیوتری را دن']
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
wukevin/tcr-bert | ef65ddcb4e549990e584680e27f9ae2618c884ff | 2021-11-22T08:32:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | wukevin | null | wukevin/tcr-bert | 312 | null | transformers | 2,905 | # TCR transformer model
See our full [codebase](https://github.com/wukevin/tcr-bert) and our [preprint](https://www.biorxiv.org/content/10.1101/2021.11.18.469186v1) for more information.
This model is trained on:
- Masked language modeling (masked amino acid or MAA modeling)
- Classification across antigen labels from PIRD
If you are looking for a model trained only on MAA, please see our [other model](https://huggingface.co/wukevin/tcr-bert-mlm-only).
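For quick experimentation, here is a minimal sketch using the Transformers text-classification pipeline; whether the hosted classification head and tokenizer load this way is an assumption, so refer to the codebase linked above for the officially supported workflow. The input format (space-separated amino acids) follows the example inputs listed below.
```python
# Hedged sketch: assumes this checkpoint works with the standard
# text-classification pipeline; see the tcr-bert codebase for the
# officially supported inference path.
from transformers import pipeline

classifier = pipeline("text-classification", model="wukevin/tcr-bert")
print(classifier("C A S S P V T G G I Y G Y T F"))  # list of {label, score} dicts
```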
Example inputs:
* `C A S S P V T G G I Y G Y T F` (binds to NLVPMVATV CMV antigen)
* `C A T S G R A G V E Q F F` (binds to GILGFVFTL flu antigen) |
NeuML/t5-small-txtsql | acaa03e08f36ceb4b9811dde8ddf7f9c48eaa196 | 2022-04-28T13:15:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | NeuML | null | NeuML/t5-small-txtsql | 312 | 1 | transformers | 2,906 | ---
language: en
widget:
- text: "translate English to SQL: Tell me a feel good story over last day"
example_title: Last day 1
- text: "translate English to SQL: feel good story since yesterday"
example_title: Last day 2
- text: "translate English to SQL: Show me sports stories since yesterday with team equal Red Sox"
example_title: Last day with filter
- text: "translate English to SQL: Breaking news summarized"
example_title: Summary
- text: "translate English to SQL: Breaking news translated to fr"
example_title: Translate to French
inference:
parameters:
max_length: 512
license: apache-2.0
---
# T5-small fine-tuned to generate txtai SQL
[T5 small](https://huggingface.co/t5-small) fine-tuned to generate [txtai](https://github.com/neuml/txtai) SQL. This model takes natural language queries and builds txtai-compatible SQL statements.
txtai supports both natural language queries
```
Tell me a feel good story
Show me stories about wildlife
Sports stories about hockey
```
and SQL statements
```
select * from txtai where similar("Tell me a feel good story") and
entry >= date('now', '-1 day')
```
This model bridges the gap between the two and enables natural language queries with filters.
```
Tell me a feel good story since yesterday
Show me sports stories since yesterday with team equal Red Sox
Breaking news summarized
Breaking news translated to fr
```
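A minimal inference sketch, assuming the standard Transformers `text2text-generation` pipeline and the `translate English to SQL:` prefix shown in the widget examples above:
```python
from transformers import pipeline

# Hedged sketch: the prompt prefix is taken from this card's widget examples.
generator = pipeline("text2text-generation", model="NeuML/t5-small-txtsql")
query = "translate English to SQL: Tell me a feel good story since yesterday"
print(generator(query, max_length=512)[0]["generated_text"])
```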
## Custom query syntax
This model is an example of creating a custom query syntax that can be translated into SQL txtai can understand. Any query syntax can be created. This one supports English but a similar strategy can be deployed to support other languages. Natural language can be translated to functions, query clauses, column selection and more.
See [t5-small-bashsql](https://huggingface.co/NeuML/t5-small-bashsql) for a model that translates Bash like commands into txtai SQL.
## Model training
This model was trained using scripts that can be [found here](https://github.com/neuml/txtai/tree/master/models/txtsql).
Steps to train:
```bash
python generate.py txtsql.csv
python train.py txtsql.csv t5-small-txtsql
```
|
Awsaf/DialoGPT-medium-eren | c6bb9f15f6e529a1e8eb894fe5b10121cfe1d2c1 | 2021-09-21T07:51:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Awsaf | null | Awsaf/DialoGPT-medium-eren | 311 | null | transformers | 2,907 | ---
tags:
- conversational
---
# Eren Yeager DialoGPT Model |
KhanAdeeb/model-tony-stark | e97f3e0a1905d812a7f4c40e2d6db843c471c7a8 | 2021-08-27T15:54:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KhanAdeeb | null | KhanAdeeb/model-tony-stark | 311 | null | transformers | 2,908 | ---
tags:
- conversational
---
# Model for chat bot to talk like tony stark |
bankholdup/rugpt3_song_writer | b36e2b2198ad85d47b3685a9340f9d7404153d33 | 2022-01-25T10:43:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers"
] | text-generation | false | bankholdup | null | bankholdup/rugpt3_song_writer | 311 | 1 | transformers | 2,909 | ---
language:
- ru
tags:
- PyTorch
- Transformers
widget:
- text: "Батя возвращается трезвый, в руке буханка"
example_title: "Example 1"
- text: "Как дела? Как дела? Это новый кадиллак"
example_title: "Example 2"
- text: "4:20 на часах и я дрочу на твоё фото"
example_title: "Example 3"
inference:
parameters:
temperature: 0.9
k: 50
p: 0.95
length: 1500
---
Model based on [ruGPT-3](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) for generating songs.
Tuned on lyrics collected from [genius](https://genius.com/).
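A minimal generation sketch, assuming the standard Transformers `text-generation` pipeline; the prompt is the first widget example from this card and the sampling parameters mirror the inference settings above (the shorter `max_length` is just for a quick test):
```python
from transformers import pipeline

# Hedged sketch: prompt and sampling settings come from this card's widget/inference metadata.
generator = pipeline("text-generation", model="bankholdup/rugpt3_song_writer")
prompt = "Батя возвращается трезвый, в руке буханка"
print(generator(prompt, do_sample=True, temperature=0.9, top_k=50, top_p=0.95,
                max_length=200)[0]["generated_text"])
```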
Examples of used artists:
* [Oxxxymiron](https://genius.com/artists/Oxxxymiron)
* [Моргенштерн](https://genius.com/artists/Morgenshtern)
* [ЛСП](https://genius.com/artists/Lsp)
* [Гражданская оборона](https://genius.com/artists/Civil-defense)
* [Король и Шут](https://genius.com/artists/The-king-and-the-jester)
* etc |
huggingtweets/cummilkshake-miraiwillsaveus-technobaphomet | 81e888d4af969f3680421e89c62e39a4f025c69b | 2021-11-02T02:39:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cummilkshake-miraiwillsaveus-technobaphomet | 311 | null | transformers | 2,910 | ---
language: en
thumbnail: https://www.huggingtweets.com/cummilkshake-miraiwillsaveus-technobaphomet/1635820776478/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445592317314748423/Y3vOt6Xq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374721840472526851/kzKWx1OS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1448723012514041865/ydq1VOBm_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jumb & isaac & jay z</div>
<div style="text-align: center; font-size: 14px;">@cummilkshake-miraiwillsaveus-technobaphomet</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jumb & isaac & jay z.
| Data | jumb | isaac | jay z |
| --- | --- | --- | --- |
| Tweets downloaded | 3232 | 3153 | 3061 |
| Retweets | 736 | 362 | 83 |
| Short tweets | 594 | 977 | 1230 |
| Tweets kept | 1902 | 1814 | 1748 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3tmpkkja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cummilkshake-miraiwillsaveus-technobaphomet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39yato7e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39yato7e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cummilkshake-miraiwillsaveus-technobaphomet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
textattack/distilbert-base-uncased-MNLI | 2cee56ec53fc7935042c094638345db757eece0d | 2020-06-09T16:47:05.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-MNLI | 311 | null | transformers | 2,911 | Entry not found |
Primer/bart-squad2 | 721720768bf69cea5d1315c4fd4f8dad4e79723f | 2020-12-11T21:30:04.000Z | [
"pytorch",
"bart",
"question-answering",
"en",
"transformers",
"autotrain_compatible"
] | question-answering | false | Primer | null | Primer/bart-squad2 | 310 | 1 | transformers | 2,912 | ---
language: "en"
---
# BART-Squad2
## Model description
BART for extractive (span-based) question answering, trained on Squad 2.0.
F1 score of 87.4.
## Intended uses & limitations
Unfortunately, the Huggingface auto-inference API won't run this model, so if you're attempting to try it through the input box above and it complains, don't be discouraged!
#### How to use
Here's a quick way to get question answering running locally:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Primer/bart-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("Primer/bart-squad2")
model.to('cuda'); model.eval()
def answer(question, text):
    seq = '<s>' + question + ' </s> </s> ' + text + ' </s>'
    tokens = tokenizer.encode_plus(seq, return_tensors='pt', padding='max_length', max_length=1024)
    input_ids = tokens['input_ids'].to('cuda')
    attention_mask = tokens['attention_mask'].to('cuda')
    start, end, _ = model(input_ids, attention_mask=attention_mask)
    start_idx = int(start.argmax().int())
    end_idx = int(end.argmax().int())
    print(tokenizer.decode(input_ids[0, start_idx:end_idx]).strip())
    # ^^ it will be an empty string if the model decided "unanswerable"
>>> question = "Where does Tom live?"
>>> context = "Tom is an engineer in San Francisco."
>>> answer(question, context)
San Francisco
```
(Just drop the `.to('cuda')` stuff if running on CPU).
#### Limitations and bias
Unknown; no further evaluation has been performed. In a technical sense, one big limitation is that the model weighs in at 1.6 GB 😬
## Training procedure
`run_squad.py` with:
|param|value|
|---|---|
|batch size|8|
|max_seq_length|1024|
|learning rate|1e-5|
|epochs|2|
Modified to freeze shared parameters and encoder embeddings.
|
dbmdz/convbert-base-turkish-mc4-cased | da111d56f7f4ec0b76d01d7751d69cb80d93c6b5 | 2021-09-23T10:40:43.000Z | [
"pytorch",
"tf",
"convbert",
"fill-mask",
"tr",
"dataset:allenai/c4",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/convbert-base-turkish-mc4-cased | 310 | 1 | transformers | 2,913 | ---
language: tr
license: mit
datasets:
- allenai/c4
---
# 🇹🇷 Turkish ConvBERT model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've trained a (cased) ConvBERT model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242 GB, resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ConvBERT
In addition to the ELEC**TR**A base model, we also trained a ConvBERT model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-mc4-cased")
model = AutoModel.from_pretrained("dbmdz/convbert-base-turkish-mc4-cased")
```
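Since this is a fill-mask checkpoint, a quick masked-prediction sketch may also be useful (a hedged example; the BERT-style `[MASK]` token and the sample sentence are assumptions):
```python
from transformers import pipeline

# Hedged sketch: assumes the standard BERT-style [MASK] token.
fill_mask = pipeline("fill-mask", model="dbmdz/convbert-base-turkish-mc4-cased")
print(fill_mask("İstanbul Türkiye'nin en büyük [MASK]."))
```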
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️ |
unicamp-dl/mt5-base-mmarco-v2 | cc0a949b9f21efcaba45c8cabb998ad02ce8d4e7 | 2022-01-05T23:21:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"t5",
"tensorflow",
"pt-br",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/mt5-base-mmarco-v2 | 310 | null | transformers | 2,914 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mt5-base Reranker finetuned on mMARCO
## Introduction
mt5-base-mmarco-v2 is an mT5-based model fine-tuned on a multilingual, translated version of the MS MARCO passage dataset. This dataset, named Multi MS MARCO, comprises 9 complete MS MARCO passage collections in 9 different languages. In the v2 version, the datasets were translated using Google Translate.
Further information about the dataset or the translation method can be found on our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration
model_name = 'unicamp-dl/mt5-base-mmarco-v2'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
```
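Continuing from the snippet above, a hedged sketch of scoring a query–passage pair in the monoT5 style; the exact input template and the "yes"/"no" target tokens are assumptions, so check the mMARCO repository for the format actually used during training:
```python
# Hedged sketch (tokenizer and model loaded as above); the prompt template is an assumption.
query = "what is the capital of brazil"
passage = "Brasília is the federal capital of Brazil."
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=1)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # expected to be "yes" or "no"
```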
# Citation
If you use mt5-base-mmarco-v2, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
hfl/cino-large | 03c0611c2dd4b1e82eece4a6ff964510615f2eab | 2022-01-24T09:28:57.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"zh",
"bo",
"kk",
"ko",
"mn",
"ug",
"yue",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | hfl | null | hfl/cino-large | 309 | 6 | transformers | 2,915 | ---
language:
- zh
- bo
- kk
- ko
- mn
- ug
- yue
license: "apache-2.0"
---
## CINO: Pre-trained Language Models for Chinese Minority Languages (中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual ability for language understanding.
We have seen rapid progress in building multilingual PLMs in recent years.
However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training using Chinese minority-language corpora, such as
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
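A minimal fill-mask usage sketch (a hedged example; since CINO is built on XLM-R, the `<mask>` token is assumed, and the sample sentence is only an illustration):
```python
from transformers import pipeline

# Hedged sketch: assumes the XLM-R-style <mask> token inherited from the base model.
fill_mask = pipeline("fill-mask", model="hfl/cino-large")
print(fill_mask("中国的首都是<mask>。"))
```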
You may also be interested in:
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
edbeeching/decision-transformer-gym-hopper-medium | 8224ec324200b150f10287b8c8c525224e62f319 | 2022-06-29T19:15:16.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
] | reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-hopper-medium | 309 | null | transformers | 2,916 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium trajectories sampled from the Gym Hopper environment
This is a trained [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium trajectories sampled from the Gym Hopper environment.
The following normalization coefficients are required to use this model:
mean = [ 1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366, 2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286]
std = [0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444, 0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536 ]
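For reference, a short sketch of how these coefficients would typically be applied to raw Hopper observations before they are fed to the model (variable names are illustrative):
```python
import numpy as np

# Standardize raw observations with the normalization coefficients from this card.
state_mean = np.array([1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366,
                       2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286])
state_std = np.array([0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444,
                      0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536])

def normalize(observation):
    return (observation - state_mean) / state_std  # element-wise standardization
```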
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage. |
Intel/roberta-base-mrpc | f2f8409ff480d8205f88dee4a2788d5cbd6f45b8 | 2022-04-21T05:30:31.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Intel | null | Intel/roberta-base-mrpc | 309 | null | transformers | 2,917 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8774509803921569
- name: F1
type: f1
value: 0.9137931034482758
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-mrpc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5565
- Accuracy: 0.8775
- F1: 0.9138
- Combined Score: 0.8956
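A minimal paraphrase-classification sketch (a hedged example; the sentence pair is illustrative, and mapping label id 1 to "equivalent" follows the usual GLUE MRPC convention rather than anything stated in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch: MRPC is a sentence-pair task, so the tokenizer is given both sentences.
tokenizer = AutoTokenizer.from_pretrained("Intel/roberta-base-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Intel/roberta-base-mrpc")

s1 = "The company said profits rose sharply last quarter."
s2 = "Profits at the company increased strongly in the last quarter."
inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over (not_equivalent, equivalent), per the usual MRPC labels
```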
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
artemnech/enrut5-small | fee8453db72b70be4194c63c5b91c7ce98723263 | 2022-07-05T19:10:03.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | artemnech | null | artemnech/enrut5-small | 309 | null | transformers | 2,918 | Entry not found |
Rick-C137/DialoGPT-small-rick | 07bff26072c6d5d33527ffd9a6180c655e4e4099 | 2022-07-15T00:10:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Rick-C137 | null | Rick-C137/DialoGPT-small-rick | 309 | null | transformers | 2,919 | ---
tags:
- conversational
---
# Rick DialoGPt Model |
Hamhams/DialoGPT-small-rick | 8729e8a982f304dd9bd0861ede88c9f7a42bbd14 | 2022-02-25T04:21:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Hamhams | null | Hamhams/DialoGPT-small-rick | 308 | null | transformers | 2,920 | ---
tags:
- conversational
---
#Rick DialoGPT Model |
Gowtham25/DialoGPT-small-jackie | 1d550d04195b1e7c16fdf4742589abae316ffd1b | 2021-08-28T10:31:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Gowtham25 | null | Gowtham25/DialoGPT-small-jackie | 307 | 1 | transformers | 2,921 | ---
tags:
- conversational
---
# Jackie DialoGPT Model |
elozano/bert-base-cased-news-category | fbdaa11402acf946b10c8ed24fe87017b1f6b726 | 2022-03-01T20:30:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | elozano | null | elozano/bert-base-cased-news-category | 307 | 4 | transformers | 2,922 | Entry not found |
emrecan/bert-base-turkish-cased-allnli_tr | c71182e80ce1bba21d07d1f1dd18ebef5228b0b6 | 2021-12-02T14:58:36.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:mit"
] | zero-shot-classification | false | emrecan | null | emrecan/bert-base-turkish-cased-allnli_tr | 307 | null | transformers | 2,923 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: mit
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5771
- Accuracy: 0.7978
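A minimal zero-shot classification sketch; the example text and candidate labels are taken from the widget above, while the Turkish hypothesis template is an assumption:
```python
from transformers import pipeline

# Hedged sketch: text and labels come from this card's widget; the hypothesis template is assumed.
classifier = pipeline("zero-shot-classification", model="emrecan/bert-base-turkish-cased-allnli_tr")
result = classifier(
    "Dolar yükselmeye devam ediyor.",
    candidate_labels=["ekonomi", "siyaset", "spor"],
    hypothesis_template="Bu örnek {} ile ilgilidir.",
)
print(result["labels"], result["scores"])
```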
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8559 | 0.03 | 1000 | 0.7577 | 0.6798 |
| 0.6612 | 0.07 | 2000 | 0.7263 | 0.6958 |
| 0.6115 | 0.1 | 3000 | 0.6431 | 0.7364 |
| 0.5916 | 0.14 | 4000 | 0.6347 | 0.7407 |
| 0.5719 | 0.17 | 5000 | 0.6317 | 0.7483 |
| 0.5575 | 0.2 | 6000 | 0.6034 | 0.7544 |
| 0.5521 | 0.24 | 7000 | 0.6148 | 0.7568 |
| 0.5393 | 0.27 | 8000 | 0.5931 | 0.7610 |
| 0.5382 | 0.31 | 9000 | 0.5866 | 0.7665 |
| 0.5306 | 0.34 | 10000 | 0.5881 | 0.7594 |
| 0.5295 | 0.37 | 11000 | 0.6120 | 0.7632 |
| 0.5225 | 0.41 | 12000 | 0.5620 | 0.7759 |
| 0.5112 | 0.44 | 13000 | 0.5641 | 0.7769 |
| 0.5133 | 0.48 | 14000 | 0.5571 | 0.7798 |
| 0.5023 | 0.51 | 15000 | 0.5719 | 0.7722 |
| 0.5017 | 0.54 | 16000 | 0.5482 | 0.7844 |
| 0.5111 | 0.58 | 17000 | 0.5503 | 0.7800 |
| 0.4929 | 0.61 | 18000 | 0.5502 | 0.7836 |
| 0.4923 | 0.65 | 19000 | 0.5424 | 0.7843 |
| 0.4894 | 0.68 | 20000 | 0.5417 | 0.7851 |
| 0.4877 | 0.71 | 21000 | 0.5514 | 0.7841 |
| 0.4818 | 0.75 | 22000 | 0.5494 | 0.7848 |
| 0.4898 | 0.78 | 23000 | 0.5450 | 0.7859 |
| 0.4823 | 0.82 | 24000 | 0.5417 | 0.7878 |
| 0.4806 | 0.85 | 25000 | 0.5354 | 0.7875 |
| 0.4779 | 0.88 | 26000 | 0.5338 | 0.7848 |
| 0.4744 | 0.92 | 27000 | 0.5277 | 0.7934 |
| 0.4678 | 0.95 | 28000 | 0.5507 | 0.7871 |
| 0.4727 | 0.99 | 29000 | 0.5603 | 0.7789 |
| 0.4243 | 1.02 | 30000 | 0.5626 | 0.7894 |
| 0.3955 | 1.05 | 31000 | 0.5324 | 0.7939 |
| 0.4022 | 1.09 | 32000 | 0.5322 | 0.7925 |
| 0.3976 | 1.12 | 33000 | 0.5450 | 0.7920 |
| 0.3913 | 1.15 | 34000 | 0.5464 | 0.7948 |
| 0.406 | 1.19 | 35000 | 0.5406 | 0.7958 |
| 0.3875 | 1.22 | 36000 | 0.5489 | 0.7878 |
| 0.4024 | 1.26 | 37000 | 0.5427 | 0.7925 |
| 0.3988 | 1.29 | 38000 | 0.5335 | 0.7904 |
| 0.393 | 1.32 | 39000 | 0.5415 | 0.7923 |
| 0.3988 | 1.36 | 40000 | 0.5385 | 0.7962 |
| 0.3912 | 1.39 | 41000 | 0.5383 | 0.7950 |
| 0.3949 | 1.43 | 42000 | 0.5415 | 0.7931 |
| 0.3902 | 1.46 | 43000 | 0.5438 | 0.7893 |
| 0.3948 | 1.49 | 44000 | 0.5348 | 0.7906 |
| 0.3921 | 1.53 | 45000 | 0.5361 | 0.7890 |
| 0.3944 | 1.56 | 46000 | 0.5419 | 0.7953 |
| 0.3959 | 1.6 | 47000 | 0.5402 | 0.7967 |
| 0.3926 | 1.63 | 48000 | 0.5429 | 0.7925 |
| 0.3854 | 1.66 | 49000 | 0.5346 | 0.7959 |
| 0.3864 | 1.7 | 50000 | 0.5241 | 0.7979 |
| 0.385 | 1.73 | 51000 | 0.5149 | 0.8002 |
| 0.3871 | 1.77 | 52000 | 0.5325 | 0.8002 |
| 0.3819 | 1.8 | 53000 | 0.5332 | 0.8022 |
| 0.384 | 1.83 | 54000 | 0.5419 | 0.7873 |
| 0.3899 | 1.87 | 55000 | 0.5225 | 0.7974 |
| 0.3894 | 1.9 | 56000 | 0.5358 | 0.7977 |
| 0.3838 | 1.94 | 57000 | 0.5264 | 0.7988 |
| 0.3881 | 1.97 | 58000 | 0.5280 | 0.7956 |
| 0.3756 | 2.0 | 59000 | 0.5601 | 0.7969 |
| 0.3156 | 2.04 | 60000 | 0.5936 | 0.7925 |
| 0.3125 | 2.07 | 61000 | 0.5898 | 0.7938 |
| 0.3179 | 2.11 | 62000 | 0.5591 | 0.7981 |
| 0.315 | 2.14 | 63000 | 0.5853 | 0.7970 |
| 0.3122 | 2.17 | 64000 | 0.5802 | 0.7979 |
| 0.3105 | 2.21 | 65000 | 0.5758 | 0.7979 |
| 0.3076 | 2.24 | 66000 | 0.5685 | 0.7980 |
| 0.3117 | 2.28 | 67000 | 0.5799 | 0.7944 |
| 0.3108 | 2.31 | 68000 | 0.5742 | 0.7988 |
| 0.3047 | 2.34 | 69000 | 0.5907 | 0.7921 |
| 0.3114 | 2.38 | 70000 | 0.5723 | 0.7937 |
| 0.3035 | 2.41 | 71000 | 0.5944 | 0.7955 |
| 0.3129 | 2.45 | 72000 | 0.5838 | 0.7928 |
| 0.3071 | 2.48 | 73000 | 0.5929 | 0.7949 |
| 0.3061 | 2.51 | 74000 | 0.5794 | 0.7967 |
| 0.3068 | 2.55 | 75000 | 0.5892 | 0.7954 |
| 0.3053 | 2.58 | 76000 | 0.5796 | 0.7962 |
| 0.3117 | 2.62 | 77000 | 0.5763 | 0.7981 |
| 0.3062 | 2.65 | 78000 | 0.5852 | 0.7964 |
| 0.3004 | 2.68 | 79000 | 0.5793 | 0.7966 |
| 0.3146 | 2.72 | 80000 | 0.5693 | 0.7985 |
| 0.3146 | 2.75 | 81000 | 0.5788 | 0.7982 |
| 0.3079 | 2.79 | 82000 | 0.5726 | 0.7978 |
| 0.3058 | 2.82 | 83000 | 0.5677 | 0.7988 |
| 0.3055 | 2.85 | 84000 | 0.5701 | 0.7982 |
| 0.3049 | 2.89 | 85000 | 0.5809 | 0.7970 |
| 0.3044 | 2.92 | 86000 | 0.5741 | 0.7986 |
| 0.3057 | 2.96 | 87000 | 0.5743 | 0.7980 |
| 0.3081 | 2.99 | 88000 | 0.5771 | 0.7978 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
howey/electra-base-sst2 | 95b74e849ef5c63df384f6363d0d8fdbc3725bf3 | 2021-04-16T12:45:46.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | howey | null | howey/electra-base-sst2 | 307 | null | transformers | 2,924 | Entry not found |
huggingtweets/lithros | e6884527a529aaad055d2697d6c52afc814a15ec | 2021-05-22T12:20:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lithros | 307 | null | transformers | 2,925 | ---
language: en
thumbnail: https://www.huggingtweets.com/lithros/1616778118561/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1345210731998937088/LaH3WCVy_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Scott Hansen 🤖 AI Bot </div>
<div style="font-size: 15px">@lithros bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@lithros's tweets](https://twitter.com/lithros).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 279 |
| Short tweets | 505 |
| Tweets kept | 2462 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1f7bjpqi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lithros's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1j5ekaf6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1j5ekaf6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lithros')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
edumunozsala/beto_sentiment_analysis_es | a89bd5e7a939f8066ee9d0ab4a5e74cbeaaf4ee1 | 2022-07-29T09:17:43.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"dataset:IMDbreviews_es",
"transformers",
"sagemaker",
"beto",
"TextClassification",
"SentimentAnalysis",
"license:apache-2.0",
"model-index"
] | text-classification | false | edumunozsala | null | edumunozsala/beto_sentiment_analysis_es | 307 | null | transformers | 2,926 | ---
language: es
tags:
- sagemaker
- beto
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: beto_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "IMDb Reviews in Spanish"
type: IMDbreviews_es
metrics:
- name: Accuracy
type: accuracy
value: 0.9101333333333333
- name: F1 Score
type: f1
value: 0.9088450094671354
- name: Precision
type: precision
value: 0.9105691056910569
- name: Recall
type: recall
value: 0.9071274298056156
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
# Model beto_sentiment_analysis_es
## **A fine-tuned model for sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **BETO**, a BERT-base model pre-trained on a Spanish corpus. BETO is similar in size to BERT-Base and was trained with the Whole Word Masking technique.
**BETO Citation**
[Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf)
```
@inproceedings{CaneteCFP2020,
title={Spanish Pre-Trained BERT Model and Evaluation Data},
author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge},
booktitle={PML4DC at ICLR 2020},
year={2020}
}
```
## Dataset
The dataset is a collection of about 50,000 movie reviews in Spanish. The dataset is balanced and provides every review in English and in Spanish, along with the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Intended uses & limitations
This model is intended for sentiment analysis on Spanish text and is fine-tuned specifically for movie reviews, but it can be applied to other kinds of reviews.
## Hyperparameters
{
"epochs": "4",
"train_batch_size": "32",
"eval_batch_size": "8",
"fp16": "true",
"learning_rate": "3e-05",
"model_name": "\"dccuchile/bert-base-spanish-wwm-uncased\"",
"sagemaker_container_log_level": "20",
"sagemaker_program": "\"train.py\"",
}
## Evaluation results
- Accuracy = 0.9101333333333333
- F1 Score = 0.9088450094671354
- Precision = 0.9105691056910569
- Recall = 0.9071274298056156
## Test results
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/beto_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/beto_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
KENNETHFOO/DialoGPT-medium-harrypotter | 037a147f5630db6a3321b3722a0d2099ce1d8f0b | 2021-10-12T02:32:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KENNETHFOO | null | KENNETHFOO/DialoGPT-medium-harrypotter | 306 | null | transformers | 2,927 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
cheulyop/wav2vec2-large-xlsr-ksponspeech_1-20 | ae5ea835e1ddc6ec0406ffb53906c338b1a476f0 | 2021-07-06T00:26:00.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | cheulyop | null | cheulyop/wav2vec2-large-xlsr-ksponspeech_1-20 | 306 | null | transformers | 2,928 | Entry not found |
gorkemgoknar/gpt2-turkish-writer | 214c737f0831c9befe6d87e4b8300d4e09231063 | 2021-09-22T08:29:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"tr",
"dataset:wikipedia-turkish",
"dataset:custom-book-corpus",
"transformers",
"turkish",
"aiwriter",
"finetuned",
"license:apache-2.0"
] | text-generation | false | gorkemgoknar | null | gorkemgoknar/gpt2-turkish-writer | 306 | 2 | transformers | 2,929 | ---
language:
- tr
thumbnail:
tags:
- gpt2
- turkish
- aiwriter
- finetuned
license: apache-2.0
datasets:
- wikipedia-turkish
- custom-book-corpus
metrics:
- perplexity
- accuracy
widget:
- text: Bir zaman topu olan ama köpeği olmayan bir çocuk vardı. Parkta
context: ''
- text: 'Uzun uzun sahile doğru baktı. Düşündüklerinden '
context: ''
- text: Çok uzun zaman önce galaksinin uzak bir köşesinde...
context: ''
- text: "'Bugün kendimi çok hasta hissediyorum' dedi. Karşısında "
context: ''
---
# Turkish AI Writer based on GPT2-Small
# Türkçe Yapay Zeka Yazarı
## Model description
This model is an enhanced version of the fine-tuned gpt2-small-turkish model. In addition to the 28-10-2020 Turkish Wikipedia article dump, this model is trained on more than 400 classic novels and plays in Turkish (including Dostoevsky, Shakespeare, Dumas).
The base work follows Pierre Guillou's tutorial, as on this page:
(https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb)
Note that since Turkish is not as close to English as Portuguese is, the last 3 layers were trained instead of only the last 2.
The code has been converted to work with fastai 2.X.
Google Colab was used for training.
Current accuracy: 36.3%, perplexity: 44.75
Demo (using CPU inference) is available on: http://www.metayazar.com
Models are available:
* [gpt2-small-tuned-tr] (https://huggingface.co/gorkemgoknar/gpt2-small-turkish)
* [gpt2-small-turkish-writer] (https://huggingface.co/gorkemgoknar/gpt2-turkish-writer)
## Intended uses & limitations
#### How to use
#### Install
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
tokenizer = AutoTokenizer.from_pretrained("gorkemgoknar/gpt2-turkish-writer")
model = AutoModelWithLMHead.from_pretrained("gorkemgoknar/gpt2-turkish-writer")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
model.eval() # disable dropout (or leave in train mode to finetune)
```
#### Generate 1 word
```python
# input sequence
text = "Bu yazıyı bilgisayar yazdı."
inputs = tokenizer(text, return_tensors="pt")
# model output
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])
# results
print('input text:', text)
print('predicted text:', predicted_text)
# input text:
# predicted text:
```
#### Generate Full Sequence
```python
# input sequence
text = "Bu yazıyı bilgisayar yazdı."
inputs = tokenizer(text, return_tensors="pt")
# model output using Top-k sampling text generation method
sample_outputs = model.generate(inputs.input_ids,
pad_token_id=50256,
do_sample=True,
max_length=50, # put the token number you want
top_k=40,
num_return_sequences=1)
# generated sequence
for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist())))
# >> Generated text
#
```
#### Limitations and bias
The training data used for this model comes from Turkish Wikipedia and books. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Also, not much pre-processing was done on the books, so chapter names and page numbers can be seen in some cases. This is a work in progress.
## Training data
Wikipedia Turkish article dump as of 28-10-2020
Turkish book dataset of >400 classic novels
## Training procedure
## Eval results
| epoch |train_loss |valid_loss |accuracy |perplexity |time |
| ----- | -------- |--------- | ---------- | --------- | ----- |
|0 |4.497828 |4.549605 |0.277328 |94.595070 |2:09:58|
|1 |4.503929 |4.519456 |0.275071 |91.785645 |2:04:30|
|2 |3.612716 |3.921146 |0.344802 |50.458256 |2:03:22|
|3 |3.777645 |4.072006 |0.326130 |58.674530 |1:56:14|
|4 |2.934462 |3.801303 |0.363719 |44.759476 |1:58:55|
Note: 1cycle-rule training is used, and the epochs were run at different times.
|
huggingtweets/staidindoors | 748cebfca3dfac06371946d060cb4fb1bc45cb5a | 2021-07-23T23:26:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/staidindoors | 306 | null | transformers | 2,930 | ---
language: en
thumbnail: https://www.huggingtweets.com/staidindoors/1627082764759/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1418465930456092672/-iGnfQyn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">staid</div>
<div style="text-align: center; font-size: 14px;">@staidindoors</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from staid.
| Data | staid |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 919 |
| Short tweets | 611 |
| Tweets kept | 1710 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1crkj9xo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @staidindoors's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/it5qlwh5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/it5qlwh5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/staidindoors')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
remotejob/tweetsDISTILGPT2fi_v4 | f7ec257f7c8554544d8853d6403bb7ddf48c50f7 | 2021-11-29T22:22:30.000Z | [
"pytorch",
"rust",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | remotejob | null | remotejob/tweetsDISTILGPT2fi_v4 | 306 | null | transformers | 2,931 | Entry not found |
IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese | 403c195815af2b23fcc12c2a3e122bd42d2b6d84 | 2022-07-25T06:26:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"clip",
"zh",
"image-text",
"feature-extraction",
"license:apache-2.0"
] | feature-extraction | false | IDEA-CCNL | null | IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese | 306 | null | transformers | 2,932 | ---
license: apache-2.0
# inference: false
# pipeline_tag: zero-shot-image-classification
pipeline_tag: feature-extraction
# inference:
# parameters:
tags:
- clip
- zh
- image-text
- feature-extraction
---
# Model Details
This model is a Chinese CLIP model trained on the [Noah-Wukong Dataset](https://wukong-dataset.github.io/wukong-dataset/), which contains about 100M Chinese image-text pairs. We use ViT-L-14 from [openAI](https://github.com/openai/CLIP) as the image encoder and the Chinese pre-trained language model [chinese-roberta-wwm-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) as the text encoder. We freeze the image encoder and only fine-tune the text encoder. The model was trained for 10 epochs, which took about 5 days with 16 A100 GPUs. **This is a beta version; we will continuously update this model.**
# Taiyi (太乙)
Taiyi models are a branch of the Fengshenbang (封神榜) series of models. The models in Taiyi are pre-trained with multimodal pre-training strategies. We will release more image-text models trained on Chinese datasets to benefit the Chinese community.
# Usage
```python3
from PIL import Image
import requests
import clip
import torch
from transformers import BertForSequenceClassification, BertConfig, BertTokenizer
from transformers import CLIPProcessor, CLIPModel
import numpy as np
query_texts = ["一只猫", "一只狗",'两只猫', '两只老虎','一只老虎']  # input texts; replace them with anything you like
# load the Taiyi Chinese text encoder
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-large-326M-Chinese").eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']
url = "http://images.cocodataset.org/val2017/000000039769.jpg" # 这里可以换成任意图片的url
# 加载CLIP的image encoder
clip_model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
image = processor(images=Image.open(requests.get(url, stream=True).raw), return_tensors="pt")
with torch.no_grad():
    image_features = clip_model.get_image_features(**image)
    text_features = text_encoder(text).logits
# normalize the features
image_features = image_features / image_features.norm(dim=1, keepdim=True)
text_features = text_features / text_features.norm(dim=1, keepdim=True)
# compute cosine similarity; logit_scale is the scaling factor
logit_scale = clip_model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print(np.around(probs, 3))
```
# Evaluation
### Zero-Shot Classification
| model | dataset | Top1 | Top5 |
| ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-326M-Chinese | ImageNet1k-CN | 51.72% | 78.46% |
### Zero-Shot Text-to-Image Retrieval
| model | dataset | Top1 | Top5 | Top10 |
| ---- | ---- | ---- | ---- | ---- |
| Taiyi-CLIP-Roberta-326M-Chinese | Flickr30k-CNA-test | 51.08 % | 78.20 % | 85.94 % |
| Taiyi-CLIP-Roberta-326M-Chinese | COCO-CN-test | 52.40 % | 80.50 % | 89.60 % |
| Taiyi-CLIP-Roberta-326M-Chinese | wukong50k | 60.16 % | 90.36% | 95.61% |
# Citation
If you find this resource useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
funnel-transformer/intermediate | 84f70d2d870af3e59a07cf94df095aa5a0741e16 | 2020-12-11T21:40:25.000Z | [
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | funnel-transformer | null | funnel-transformer/intermediate | 305 | null | transformers | 2,933 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer intermediate model (B6-6-6 with decoder)
Pretrained model on the English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
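
As a rough illustration of the fine-tuning path (not part of the original card; the two-label setup is a placeholder), a sequence-classification head can be attached like this:

```python
from transformers import FunnelTokenizer, FunnelForSequenceClassification

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
# num_labels=2 is an assumption; set it to match your own dataset
model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/intermediate", num_labels=2)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
logits = model(**inputs).logits  # one untrained score per label; fine-tune on labelled data before use
```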
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = FunnelModel.from_pretrained("funnel-transformer/intermediate")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = TFFunnelModel.from_pretrained("funnel-transformer/intermediate")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
Lisia/DialoGPT-small-connor | 514d580f477d6cdd4044b988e24f08afc5fd3dec | 2022-04-26T17:38:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Lisia | null | Lisia/DialoGPT-small-connor | 305 | null | transformers | 2,934 | ---
tags:
- conversational
---
# Connor DialoGPT Model |
Shakerlicious/DialoGPT-small-raquelbot | 837b231a849f629f93ec7fc43e3a4234d51e7aef | 2022-05-05T13:21:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Shakerlicious | null | Shakerlicious/DialoGPT-small-raquelbot | 305 | null | transformers | 2,935 | ---
tags:
- conversational
---
# Raquel DialoGPT Model |
Fu10k/DialoGPT-medium-Rick | b7541f8a67ecb56110267c0f035ca674ac41e556 | 2021-09-02T07:16:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Fu10k | null | Fu10k/DialoGPT-medium-Rick | 304 | null | transformers | 2,936 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
Jeffrey/DialoGPT-small-Jeffrey | 32496ec0c667edb073e38011648995dff587f36b | 2021-09-08T15:53:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jeffrey | null | Jeffrey/DialoGPT-small-Jeffrey | 304 | null | transformers | 2,937 | ---
tags:
- conversational
---
|
KOSTAS/DialoGPT-small-Cleverbot | 1099dd14110a540295cf98a9f4a381554a1f7572 | 2021-12-07T12:41:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KOSTAS | null | KOSTAS/DialoGPT-small-Cleverbot | 304 | null | transformers | 2,938 | ---
tags:
- conversational
---
# Clever bot DialoGPT Model |
VulcanBin/DialoGPT-small-cortana | 4b1bae31adcf4fed3f46a5a205c576111bf38374 | 2021-09-30T16:48:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | VulcanBin | null | VulcanBin/DialoGPT-small-cortana | 304 | null | transformers | 2,939 | ---
tags:
- conversational
---
# Cortana DialoGPT Model |
facebook/wav2vec2-large-robust-ft-libri-960h | 2a769b1f894980d190d33e0ec1678da3f411cfe2 | 2021-11-04T14:15:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:libri_light",
"dataset:common_voice",
"dataset:switchboard",
"dataset:fisher",
"dataset:librispeech_asr",
"arxiv:2104.01027",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-robust-ft-libri-960h | 304 | 4 | transformers | 2,940 | ---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
license: apache-2.0
---
# Wav2Vec2-Large-Robust finetuned on Librispeech
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/).
This model is a fine-tuned version of the [wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) model.
It has been pretrained on:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-source collected audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
and subsequently been finetuned on 960 hours of
- [Librispeech](https://huggingface.co/datasets/librispeech_asr): open-source read-out audio data.
When using the model, make sure that your speech input is also sampled at 16 kHz.
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h")
# define function to read in sound file
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# tokenize
input_values = processor(ds["speech"][:2], return_tensors="pt", padding="longest").input_values # Batch size 2
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
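# display the decoded transcriptions
print(transcription)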
``` |
huggingtweets/averagesmasher | dff66b2f60a9b0e3e41a13fdc703d8667f7a3706 | 2021-07-10T13:47:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/averagesmasher | 304 | null | transformers | 2,941 | ---
language: en
thumbnail: https://www.huggingtweets.com/averagesmasher/1625924846625/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1368753714568327168/oh6BSjqX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AverageVermontSmasher</div>
<div style="text-align: center; font-size: 14px;">@averagesmasher</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AverageVermontSmasher.
| Data | AverageVermontSmasher |
| --- | --- |
| Tweets downloaded | 41 |
| Retweets | 0 |
| Short tweets | 2 |
| Tweets kept | 39 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/auyr340s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @averagesmasher's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qnfjchi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qnfjchi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/averagesmasher')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/conanobrien | 7f1449f15d0b4c1eebb263f73f8fdee72f04749f | 2021-05-21T23:19:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/conanobrien | 304 | null | transformers | 2,942 | ---
language: en
thumbnail: https://www.huggingtweets.com/conanobrien/1606267014440/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/730612231021322240/Rl0_QYhL_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Conan O'Brien 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@conanobrien bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@conanobrien's tweets](https://twitter.com/conanobrien).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3241</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>31</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>18</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>3192</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2fdxdxdd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @conanobrien's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ffkm78bf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ffkm78bf/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/conanobrien'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/notmikeharlow | e03e479a67b2710eeb61ff0e0c7f69030b3fecff | 2021-08-28T16:24:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/notmikeharlow | 304 | null | transformers | 2,943 | ---
language: en
thumbnail: https://www.huggingtweets.com/notmikeharlow/1630167789938/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425404754344267778/QtQaXGRF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mike Harlow</div>
<div style="text-align: center; font-size: 14px;">@notmikeharlow</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mike Harlow.
| Data | Mike Harlow |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 300 |
| Short tweets | 371 |
| Tweets kept | 2561 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xakho7a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @notmikeharlow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15adesnt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15adesnt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/notmikeharlow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ozcangundes/mt5-small-turkish-summarization | 817e701bb00173a1b433d7bf5d0d740d12bec569 | 2021-09-22T09:31:27.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"tr",
"dataset:MLSUM",
"arxiv:2004.14900",
"transformers",
"license:mit",
"summarization",
"autotrain_compatible"
] | summarization | false | ozcangundes | null | ozcangundes/mt5-small-turkish-summarization | 304 | 5 | transformers | 2,944 | ---
language: tr
datasets:
- MLSUM
pipeline_tag: summarization
license: mit
---
# mT5-small based Turkish Summarization System
[Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on the [MLSUM Turkish news dataset](https://github.com/recitalAI/MLSUM) for the **summarization** downstream task using PyTorch Lightning.⚡
The mT5 small model has 300 million parameters and a size of about 1.2 GB, so fine-tuning it takes a significant amount of time. The model was trained for 10 epochs with a batch size of 8 and a learning rate of 10e-4, which took almost 4 hours. The maximum news length is 784 tokens and the maximum summary length is 64 tokens.
**Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is usable on a downstream task.
## Dataset
The MLSUM dataset has more than 250K Turkish news articles with their related summaries. Since the mT5 model size and vocabulary are so large, 20K examples are used for training and 4K for validation. For more information about the dataset, please read this [great paper](https://arxiv.org/abs/2004.14900).
## Usage 🚀
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-summarization")
def generate_summary(main_news):
source_encoding=tokenizer(
main_news,
max_length=784,
padding="max_length",
truncation=True,
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt")
generated_ids=model.generate(
input_ids=source_encoding["input_ids"],
attention_mask=source_encoding["attention_mask"],
num_beams=2,
max_length=120,
repetition_penalty=2.5,
length_penalty=2.0,
early_stopping=True,
use_cache=True
)
preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for gen_id in generated_ids]
return "".join(preds)
```
### Example 1
```python
main_news = """Final etabının üçüncü karşılaşması 29 Nisan Pazartesi günü saat 18.00 ' de Burhan Felek
Voleybol Salonu ’ nda oynanacak . Sezonu FIVB Kulüpler Dünya Şampiyonluğu ile açan ve CEV
Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı ,
Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı VakıfBank
Spor Sarayı'nda 16-25 , 25-10 , 25-18 ve 25-17'lik setlerle 3-1 mağlup ederek seride durumu
1-1 ' e getirdi . İlk setini 25-16 kaybettiği karşılaşmanın ikinci setinde etkili servisler
kullanan sarı-siyahlılar , teknik molasına 12-5 önde girdiği seti 25-10 almayı başardı .
Etkili servis performansını üçüncü sette de sürdüren VakıfBank , teknik molasına 12-5 önde
girdiği seti 25-18 alarak , karşılaşmada 2-1 öne geçti . Dördüncü sette rakibinin geri dönüşüne
izin vermeyen VakıfBank , seti 25-17 , maçı da 3-1 kazanarak seride durumu eşitledi."""
generate_summary(main_news)
# original summary -> "Vestel Venus Sultanlar Ligi final etabı ikinci karşılaşmasında VakıfBank
# kendi sahasında Eczacıbaşı VitrA'yı 3-1 mağlup etti ve seride durumu 1-1 ' e getirdi ."
# output -> "CEV Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı,
# Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı 3-1 mağlup
# ederek seride durumu 1-1'e getirdi."
```
### Example 2
```python
main_news = """2023'te yerli tank motoru : Bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını
ifade eden Öztürk , şu değerlendirmelerde bulundu : `` Bin 500 beygirlik , şanzımanıyla beraber
motoru yerlileştirmeye çalışıyoruz . Bu da bir aksilik çıkmazsa ilk tankımızın üzerine
2023'te koyacağız . Bundan sonra hiçbir ülkeye bağımlılığımız kalmadan bu araçları üretmeye
devam edeceğiz . Sorumluluğumuzun ağır olduğunu biliyoruz . Ülkemize hizmet etmeye çalışıyoruz .
Bunu daha da ileriye götürmek için elimizden gelen çabayı sarf ediyoruz . Ama bu tek başınıza
yapılan bir operasyon değil . Türkiye'deki yerli firmalarla beraber ortaklaşa bu işi yürütmeye çalışıyoruz."""
generate_summary(main_news)
#output -> "TÜRKİYE'de bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını belirten Öztürk,
# `` Bin 500 beygirlik, şanzımanıyla beraber motoru yerlileştirmeye çalışıyoruz. Bu da bir
# aksilik çıkmazsa ilk tankımızın üzerine 2023'te koyacağız.'' dedi."
```
Created by Özcan Gündeş ✌️
---
Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a>
Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a>
Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a>
Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
|
pompeiifreckles/DialoGPT-medium-Rick | 6ee3c4a5075204b6a1137e068baabaa0890473ec | 2021-10-04T00:45:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | pompeiifreckles | null | pompeiifreckles/DialoGPT-medium-Rick | 304 | null | transformers | 2,945 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
stanford-crfm/eowyn-gpt2-medium-x777 | 68467f5363e6ea42771ae8e686a58b9a376b3578 | 2022-06-20T10:42:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stanford-crfm | null | stanford-crfm/eowyn-gpt2-medium-x777 | 304 | null | transformers | 2,946 | Entry not found |
unicamp-dl/translation-pt-en-t5 | 02844c590f318229e0e5332fafb74ab514a9a05b | 2021-10-11T03:47:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"pt",
"dataset:EMEA",
"dataset:ParaCrawl 99k",
"dataset:CAPES",
"dataset:Scielo",
"dataset:JRC-Acquis",
"dataset:Biomedical Domain Corpora",
"transformers",
"translation",
"autotrain_compatible"
] | translation | false | unicamp-dl | null | unicamp-dl/translation-pt-en-t5 | 304 | 5 | transformers | 2,947 | ---
language:
- en
- pt
datasets:
- EMEA
- ParaCrawl 99k
- CAPES
- Scielo
- JRC-Acquis
- Biomedical Domain Corpora
tags:
- translation
metrics:
- bleu
---
# Introduction
This repository brings an implementation of T5 for translation in PT-EN tasks using a modest hardware setup. We propose some changes in the tokenizer and post-processing that improve the results, and we used a Portuguese pretrained model for the translation. You can find more information in [our repository](https://github.com/unicamp-dl/Lite-T5-Translation). Also, check [our paper](https://aclanthology.org/2020.wmt-1.90.pdf)!
# Usage
Just follow the "Use in Transformers" instructions. It is necessary to prepend a few words to the input text to define the task for T5.
You can also create a pipeline for it. An example with the phrase "Eu gosto de comer arroz" is:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/translation-pt-en-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/translation-pt-en-t5")
pten_pipeline = pipeline('text2text-generation', model=model, tokenizer=tokenizer)
pten_pipeline("translate Portuguese to English: Eu gosto de comer arroz.")
```
# Citation
```bibtex
@inproceedings{lopes-etal-2020-lite,
title = "Lite Training Strategies for {P}ortuguese-{E}nglish and {E}nglish-{P}ortuguese Translation",
author = "Lopes, Alexandre and
Nogueira, Rodrigo and
Lotufo, Roberto and
Pedrini, Helio",
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.wmt-1.90",
pages = "833--840",
}
``` |
Jeongyeon/donut_ch_ticket | 27e34de3e1cd3e1b47c9805b69aa4369655168a3 | 2022-07-05T09:45:34.000Z | [
"pytorch",
"donut",
"transformers"
] | null | false | Jeongyeon | null | Jeongyeon/donut_ch_ticket | 304 | null | transformers | 2,948 | Entry not found |
Batsy24/DialoGPT-small-Twilight_EdBot | ce2b867c46b682a52e5c50b0c06a7ca072b0a13a | 2021-08-26T20:02:58.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Batsy24 | null | Batsy24/DialoGPT-small-Twilight_EdBot | 303 | null | transformers | 2,949 | ---
tags:
- conversational
---
# Twilight Edward DialoGPT Model |
JDS22/DialoGPT-medium-HarryPotterBot | db8a6f07d0d2b82bc1e87f178de340fefc2622cd | 2021-09-26T12:14:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | JDS22 | null | JDS22/DialoGPT-medium-HarryPotterBot | 303 | null | transformers | 2,950 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Ryanar/DialoGPT-medium-Zelda | f9f5d4a81083f6cd85ee2c59c61fa0b362411972 | 2021-09-15T22:13:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ryanar | null | Ryanar/DialoGPT-medium-Zelda | 303 | null | transformers | 2,951 | ---
tags:
- conversational
---
# Zeldabot |
anweasha/DialoGPT-small-Jake | 602bbfb65310f9808c653ffab6aba83c26b5ee87 | 2022-02-12T16:10:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | anweasha | null | anweasha/DialoGPT-small-Jake | 303 | null | transformers | 2,952 | ---
tags:
- conversational
---
# Jake Peralta DialoGPT Model |
fractalego/fact-checking | 856850bd7e1527bad65151c5cd7d0d7f421db25a | 2021-12-11T16:12:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | fractalego | null | fractalego/fact-checking | 303 | 1 | transformers | 2,953 | ## Fact checking
This generative model - trained on FEVER - aims to predict whether a claim is consistent with the provided evidence.
### Installation and simple usage
One quick way to install it is to type
```bash
pip install fact_checking
```
and then use the following code:
```python
from transformers import (
GPT2LMHeadModel,
GPT2Tokenizer,
)
from fact_checking import FactChecker
_evidence = """
Justine Tanya Bateman (born February 19, 1966) is an American writer, producer, and actress . She is best known for her regular role as Mallory Keaton on the sitcom Family Ties (1982 -- 1989). Until recently, Bateman ran a production and consulting company, SECTION 5 . In the fall of 2012, she started studying computer science at UCLA.
"""
_claim = 'Justine Bateman is a poet.'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
fact_checking_model = GPT2LMHeadModel.from_pretrained('fractalego/fact-checking')
fact_checker = FactChecker(fact_checking_model, tokenizer)
is_claim_true = fact_checker.validate(_evidence, _claim)
print(is_claim_true)
```
which gives the output
```bash
False
```
### Probabilistic output with replicas
The output can include a probabilistic component, obtained by iterating a number of times the output generation.
The system generates an ensemble of answers and groups them by Yes or No.
For example, one can ask
```python
from transformers import (
GPT2LMHeadModel,
GPT2Tokenizer,
)
from fact_checking import FactChecker
_evidence = """
Jane writes code for Huggingface.
"""
_claim = 'Jane is an engineer.'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
fact_checking_model = GPT2LMHeadModel.from_pretrained('fractalego/fact-checking')
fact_checker = FactChecker(fact_checking_model, tokenizer)
is_claim_true = fact_checker.validate_with_replicas(_evidence, _claim)
print(is_claim_true)
```
with output
```bash
{'Y': 0.95, 'N': 0.05}
```
### Score on FEVER
The predictions are evaluated on a subset of the FEVER dev dataset,
restricted to the SUPPORTING and REFUTING options:
| precision | recall | F1|
| --- | --- | --- |
|0.94|0.98|0.96|
These results should be taken with many grains of salt. This is still a work in progress,
and there might be leakage coming from the underlying GPT2 model unnaturally raising the scores.
|
huggingtweets/cocojonesspace | 75da0a2e04d00ea4cfa9f743777b21dc1bfdaa47 | 2021-05-21T23:07:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cocojonesspace | 303 | null | transformers | 2,954 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1316993924297334784/rFkGii31_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cody 🤖 AI Bot </div>
<div style="font-size: 15px">@cocojonesspace bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@cocojonesspace's tweets](https://twitter.com/cocojonesspace).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 609 |
| Retweets | 439 |
| Short tweets | 37 |
| Tweets kept | 133 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rf16z1e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cocojonesspace's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ppd5jtm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ppd5jtm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cocojonesspace')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dynamic_proxy | 44d1b0a2a7b3ad3488581e41ce8bd6939e12b531 | 2021-05-22T02:28:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dynamic_proxy | 303 | null | transformers | 2,955 | ---
language: en
thumbnail: https://www.huggingtweets.com/dynamic_proxy/1616667039166/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364933895234453506/ljzT7r4B_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">a gnarled woodland spirit 🤖 AI Bot </div>
<div style="font-size: 15px">@dynamic_proxy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@dynamic_proxy's tweets](https://twitter.com/dynamic_proxy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 204 |
| Short tweets | 147 |
| Tweets kept | 2892 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19d2wxay/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dynamic_proxy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ce0iq2v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ce0iq2v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dynamic_proxy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tommy19970714/translation-japanese | a7f76b74d03aa1f2ca7c65b2c089240efe4d4f72 | 2021-04-28T03:59:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
] | translation | false | tommy19970714 | null | tommy19970714/translation-japanese | 303 | 2 | transformers | 2,956 | ---
tags:
- translation
---
### Japanese translation
* source languages: ja
* target languages: en
* model: transformer-align
* pre-processing: normalization + SentencePiece
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ja.en | 41.7 | 0.589 |
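## Usage
A minimal usage sketch (not part of the original card), assuming the checkpoint follows the standard MarianMT interface in the `transformers` library:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "tommy19970714/translation-japanese"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a Japanese sentence into English
batch = tokenizer(["今日は天気がいいですね。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```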
|
huggingtweets/garymarcus | d01b19c1affcd5c820da9554fc9d7e3b681405f4 | 2022-03-22T20:19:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/garymarcus | 303 | null | transformers | 2,957 | ---
language: en
thumbnail: http://www.huggingtweets.com/garymarcus/1647980350256/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1501714358644051970/2qQM-yMC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gary Marcus 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@garymarcus</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gary Marcus 🇺🇦.
| Data | Gary Marcus 🇺🇦 |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 1356 |
| Short tweets | 155 |
| Tweets kept | 1729 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ujbkvh2a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @garymarcus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1b5cn6fg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1b5cn6fg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/garymarcus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DaNLP/da-bert-hatespeech-detection | 5431999c8eedd46c9aa2c619bdfafa7aa7aad1f7 | 2021-11-15T14:41:46.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"da",
"dataset:social media",
"transformers",
"hatespeech",
"license:cc-by-sa-4.0"
] | text-classification | false | DaNLP | null | DaNLP/da-bert-hatespeech-detection | 302 | 1 | transformers | 2,958 | ---
language:
- da
tags:
- bert
- pytorch
- hatespeech
license: cc-by-sa-4.0
datasets:
- social media
metrics:
- f1
widget:
- text: "Senile gamle idiot"
---
# Danish BERT for hate speech (offensive language) detection
The BERT HateSpeech model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-hatespeech-detection")
tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-hatespeech-detection")
```
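A minimal classification sketch (not part of the original card; the label names are read from the checkpoint's config rather than assumed here):
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-hatespeech-detection")
tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-hatespeech-detection")

# classify the example sentence from the widget above
inputs = tokenizer("Senile gamle idiot", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])  # label names come from the checkpoint config
```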
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
byeongal/Ko-DialoGPT | bb5af96ba07e98ccb8b8c50728f7de849ccd8fc9 | 2021-09-23T13:43:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ko",
"transformers",
"conversational",
"license:cc-by-nc-sa-4.0"
] | conversational | false | byeongal | null | byeongal/Ko-DialoGPT | 302 | 1 | transformers | 2,959 | ---
language: ko
tags:
- gpt2
- conversational
license: cc-by-nc-sa-4.0
---
## Ko-DialoGPT
### How to use
```python
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PreTrainedTokenizerFast.from_pretrained('byeongal/Ko-DialoGPT')
model = GPT2LMHeadModel.from_pretrained('byeongal/Ko-DialoGPT').to(device)
past_user_inputs = []
generated_responses = []
while True:
user_input = input(">> User:")
if user_input == 'bye':
break
text_idx = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors='pt')
for i in range(len(generated_responses)-1, len(generated_responses)-3, -1):
if i < 0:
break
encoded_vector = tokenizer.encode(generated_responses[i] + tokenizer.eos_token, return_tensors='pt')
if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000:
text_idx = torch.cat([encoded_vector, text_idx], dim=-1)
else:
break
encoded_vector = tokenizer.encode(past_user_inputs[i] + tokenizer.eos_token, return_tensors='pt')
if text_idx.shape[-1] + encoded_vector.shape[-1] < 1000:
text_idx = torch.cat([encoded_vector, text_idx], dim=-1)
else:
break
text_idx = text_idx.to(device)
inference_output = model.generate(
text_idx,
max_length=1000,
num_beams=5,
top_k=20,
no_repeat_ngram_size=4,
length_penalty=0.65,
repetition_penalty=2.0,
)
inference_output = inference_output.tolist()
bot_response = tokenizer.decode(inference_output[0][text_idx.shape[-1]:], skip_special_tokens=True)
print(f"Bot: {bot_response}")
past_user_inputs.append(user_input)
generated_responses.append(bot_response)
```
### Reference
* [SKT-KoGPT2](https://huggingface.co/skt/kogpt2-base-v2)
* [KETI R&D data](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-008)
* [Korean dialogue summarization](https://aihub.or.kr/aidata/30714)
|
cookirei/DialoGPT-medium-Joreyar | 32643ec667fa938b1cfe5febf97b070f62a5ccfb | 2021-08-28T18:16:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cookirei | null | cookirei/DialoGPT-medium-Joreyar | 302 | null | transformers | 2,960 | ---
tags:
- conversational
---
# Joreyar DialoGPT Model |
dats/DialoGPT-small-harrypotter | bf6ad973707088ace8140f18b0fcdc4f7b139bed | 2021-08-29T15:12:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | dats | null | dats/DialoGPT-small-harrypotter | 302 | null | transformers | 2,961 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
felinecity/DioloGPT-small-LisaBot | 1eef7fe735754c252c2e25a3fe917255c166fc09 | 2022-01-12T08:10:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | felinecity | null | felinecity/DioloGPT-small-LisaBot | 302 | null | transformers | 2,962 | ---
tags:
- conversational
---
# DioloGPT LisaBot model |
funnel-transformer/xlarge | a57ed38432204c958ec9df4b8fc999176d10005e | 2020-12-11T21:40:51.000Z | [
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | funnel-transformer | null | funnel-transformer/xlarge | 302 | null | transformers | 2,963 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer xlarge model (B10-10-10 with decoder)
Pretrained model on the English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
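
As a rough illustration of the fine-tuning path (not part of the original card; the label count is a placeholder), a token-classification head can be attached like this:

```python
from transformers import FunnelTokenizer, FunnelForTokenClassification

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
# num_labels=9 is an assumption; set it to match your own tagging scheme
model = FunnelForTokenClassification.from_pretrained("funnel-transformer/xlarge", num_labels=9)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
logits = model(**inputs).logits  # one untrained score per token and label; fine-tune on tagged data before use
```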
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = FunnelModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = TFFunnelModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
huggingtweets/americanpineapp | 96577588d3e89e4fd1cd4dfbf9e4c85d41fcdfd6 | 2021-05-21T18:38:39.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/americanpineapp | 302 | null | transformers | 2,964 | ---
language: en
thumbnail: https://www.huggingtweets.com/americanpineapp/1617768265807/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1347029113173798912/ayKe9SJB_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Quilogorath 🤖 AI Bot </div>
<div style="font-size: 15px">@americanpineapp bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@americanpineapp's tweets](https://twitter.com/americanpineapp).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3205 |
| Retweets | 1339 |
| Short tweets | 446 |
| Tweets kept | 1420 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ouupjoy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @americanpineapp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/x8qz0hii) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/x8qz0hii/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/americanpineapp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cooperativa | 789cf4d9bf8ab81c9fdc04384e858385a1879531 | 2021-05-21T23:27:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cooperativa | 302 | null | transformers | 2,965 | ---
language: en
thumbnail: https://www.huggingtweets.com/cooperativa/1604184922075/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1080867330522001408/44pEKx_C_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Cooperativa 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@cooperativa bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@cooperativa's tweets](https://twitter.com/cooperativa).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3234</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>417</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>2</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2815</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/114yjete/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cooperativa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/1vwsyebc) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/1vwsyebc/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/cooperativa'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/deni_is_aflor | 5ff145c561cc1ec2aa44078bf8fa10456158acb1 | 2021-05-22T01:20:35.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/deni_is_aflor | 302 | null | transformers | 2,966 | ---
language: en
thumbnail: https://www.huggingtweets.com/deni_is_aflor/1617777629095/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1378865749582872580/oTZARemq_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dení has returned. 🤖 AI Bot </div>
<div style="font-size: 15px">@deni_is_aflor bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@deni_is_aflor's tweets](https://twitter.com/deni_is_aflor).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3196 |
| Retweets | 1101 |
| Short tweets | 195 |
| Tweets kept | 1900 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22jo6jl8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deni_is_aflor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/l4we4gl2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/l4we4gl2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deni_is_aflor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/humantestkit | dac5fd7e0498738b5129decedfca74d3045d48d1 | 2021-05-22T07:17:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/humantestkit | 302 | null | transformers | 2,967 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1203475963499208706/kzGQ2awX_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">sneaky Pete 🤖 AI Bot </div>
<div style="font-size: 15px">@humantestkit bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@humantestkit's tweets](https://twitter.com/humantestkit).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3204 |
| Retweets | 239 |
| Short tweets | 506 |
| Tweets kept | 2459 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mm8bbeg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @humantestkit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t4jqmz8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t4jqmz8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/humantestkit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/lafrenchfabtalk | 14e09429adcaa3b43db8a007daca84a124b456d1 | 2021-05-22T11:30:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lafrenchfabtalk | 302 | null | transformers | 2,968 | ---
language: en
thumbnail: https://www.huggingtweets.com/lafrenchfabtalk/1606534721070/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1111644417692192770/bFSbn8M3_400x400.png')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Meet La French Fab 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@lafrenchfabtalk bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@lafrenchfabtalk's tweets](https://twitter.com/lafrenchfabtalk).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>325</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>75</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>23</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>227</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2cif6ly5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lafrenchfabtalk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2370zvtn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2370zvtn/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/lafrenchfabtalk'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/loverachelle2 | 63f7307efb7d8ac8cd0dc600937500e827f47dbd | 2022-02-04T17:51:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/loverachelle2 | 302 | null | transformers | 2,969 | ---
language: en
thumbnail: http://www.huggingtweets.com/loverachelle2/1643997109994/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1371211513323749377/ABF4NRhC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LoveRachelle2</div>
<div style="text-align: center; font-size: 14px;">@loverachelle2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LoveRachelle2.
| Data | LoveRachelle2 |
| --- | --- |
| Tweets downloaded | 1440 |
| Retweets | 102 |
| Short tweets | 92 |
| Tweets kept | 1246 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1liqzipo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @loverachelle2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/284b8u8q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/284b8u8q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/loverachelle2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/rgrig | 7a50c01e826dc71ef85aa038002278ece4bfa4ed | 2021-05-22T20:51:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/rgrig | 302 | null | transformers | 2,970 | ---
language: en
thumbnail: https://www.huggingtweets.com/rgrig/1603533197912/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/757884678812659713/Sp-6nUUp_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Radu Grigore 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@rgrig bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@rgrig's tweets](https://twitter.com/rgrig).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3227</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>1072</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>131</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2024</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3j5jr5gc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rgrig's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/ubw0nsbj) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/ubw0nsbj/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/rgrig'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
lordtt13/COVID-SciBERT | 86bef17597444fa3446d37635f18c48fe6d688b0 | 2021-05-19T22:06:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"arxiv:1903.10676",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lordtt13 | null | lordtt13/COVID-SciBERT | 302 | 1 | transformers | 2,971 | ---
language: en
inference: false
---
## COVID-SciBERT: A small language modelling expansion of SciBERT, a BERT model trained on scientific text.
### Details of SciBERT
The **SciBERT** model was presented in [SciBERT: A Pretrained Language Model for Scientific Text](https://arxiv.org/abs/1903.10676) by *Iz Beltagy, Kyle Lo, Arman Cohan* and here is the abstract:
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks.
### Details of the downstream task (Language Modeling) - Dataset 📚
There are actually two datasets that have been used here:
- The original SciBERT model is trained on papers from the corpus of [semanticscholar.org](https://www.semanticscholar.org). The corpus size is 1.14M papers, 3.1B tokens. The full text of the papers was used in training, not just the abstracts. SciBERT has its own vocabulary (scivocab) built to best match the training corpus.
- The expansion is done using the papers present in the [COVID-19 Open Research Dataset Challenge (CORD-19)](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge). Only the abstracts have been used; the vocabulary built from them was pruned and merged into the existing scivocab. In response to the COVID-19 pandemic, the White House and a coalition of leading research groups have prepared the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 200,000 scholarly articles, including over 100,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This freely available dataset is provided to the global research community to apply recent advances in natural language processing and other AI techniques to generate new insights in support of the ongoing fight against this infectious disease. There is a growing urgency for these approaches because of the rapid acceleration in new coronavirus literature, making it difficult for the medical research community to keep up.
### Model training
The training script is present [here](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb).
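The vocabulary-expansion step mentioned above can be sketched roughly as follows. This is a minimal illustration, not the exact procedure: the `new_tokens` list and the base checkpoint choice are assumptions, and the real pruning and merging logic lives in the linked notebook.

```python
import transformers

# Start from the original SciBERT checkpoint and its scivocab tokenizer.
model = transformers.AutoModelWithLMHead.from_pretrained('allenai/scibert_scivocab_uncased')
tokenizer = transformers.AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')

# Assumption: `new_tokens` holds COVID-19-specific terms mined from the CORD-19 abstracts.
new_tokens = ['covid', 'sars-cov-2', 'hydroxychloroquine']
num_added = tokenizer.add_tokens(new_tokens)

# Resize the embedding matrix so the added vocabulary entries get (randomly initialised) vectors,
# then continue masked-language-model training on the CORD-19 abstracts.
model.resize_token_embeddings(len(tokenizer))
```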
### Pipelining the Model
```python
import transformers
model = transformers.AutoModelWithLMHead.from_pretrained('lordtt13/COVID-SciBERT')
tokenizer = transformers.AutoTokenizer.from_pretrained('lordtt13/COVID-SciBERT')
nlp_fill = transformers.pipeline('fill-mask', model = model, tokenizer = tokenizer)
nlp_fill('Coronavirus or COVID-19 can be prevented by a' + nlp_fill.tokenizer.mask_token)
# Output:
# [{'sequence': '[CLS] coronavirus or covid - 19 can be prevented by a combination [SEP]',
# 'score': 0.1719885915517807,
# 'token': 2702},
# {'sequence': '[CLS] coronavirus or covid - 19 can be prevented by a simple [SEP]',
# 'score': 0.054218728095293045,
# 'token': 2177},
# {'sequence': '[CLS] coronavirus or covid - 19 can be prevented by a novel [SEP]',
# 'score': 0.043364267796278,
# 'token': 3045},
# {'sequence': '[CLS] coronavirus or covid - 19 can be prevented by a high [SEP]',
# 'score': 0.03732519596815109,
# 'token': 597},
# {'sequence': '[CLS] coronavirus or covid - 19 can be prevented by a vaccine [SEP]',
# 'score': 0.021863549947738647,
# 'token': 7039}]
```
> Created by [Tanmay Thakur](https://github.com/lordtt13) | [LinkedIn](https://www.linkedin.com/in/tanmay-thakur-6bb5a9154/)
> PS: Still looking for more resources to expand my expansion!
|
rinz/DialoGPT-small-Harry-Potterrr | 22be2985ff7653354513e3126f85f09e90e640bf | 2021-11-03T15:42:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rinz | null | rinz/DialoGPT-small-Harry-Potterrr | 302 | null | transformers | 2,972 | ---
tags:
- conversational
---
# Harry Potter model |
savasy/bert-turkish-text-classification | d77f48fc976aaf9d8a06c562cfd6d4b8aa3a97a1 | 2021-05-20T04:56:54.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"tr",
"transformers"
] | text-classification | false | savasy | null | savasy/bert-turkish-text-classification | 302 | 5 | transformers | 2,973 | ---
language: tr
---
# Turkish Text Classification
This model is a fine-tuned version of https://github.com/stefan-it/turkish-bert, trained on text classification data with the following 7 categories:
```
code_to_label={
'LABEL_0': 'dunya ',
'LABEL_1': 'ekonomi ',
'LABEL_2': 'kultur ',
'LABEL_3': 'saglik ',
'LABEL_4': 'siyaset ',
'LABEL_5': 'spor ',
'LABEL_6': 'teknoloji '}
```
## Data
The following Turkish benchmark dataset is used for fine-tuning
https://www.kaggle.com/savasy/ttc4900
## Quick Start
Begin by installing transformers as follows:
> pip install transformers
```
# Code:
# import libraries
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer, AutoModelForSequenceClassification
tokenizer= AutoTokenizer.from_pretrained("savasy/bert-turkish-text-classification")
# build and load the model; this may take time depending on your internet connection
model= AutoModelForSequenceClassification.from_pretrained("savasy/bert-turkish-text-classification")
# make pipeline
nlp=pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
# apply model
nlp("bla bla")
# [{'label': 'LABEL_2', 'score': 0.4753005802631378}]
code_to_label={
'LABEL_0': 'dunya ',
'LABEL_1': 'ekonomi ',
'LABEL_2': 'kultur ',
'LABEL_3': 'saglik ',
'LABEL_4': 'siyaset ',
'LABEL_5': 'spor ',
'LABEL_6': 'teknoloji '}
code_to_label[nlp("bla bla")[0]['label']]
# > 'kultur '
```
## How the model was trained
```
## loading data for Turkish text classification
import pandas as pd
# https://www.kaggle.com/savasy/ttc4900
df=pd.read_csv("7allV03.csv")
df.columns=["labels","text"]
df.labels=pd.Categorical(df.labels)
train_df=...
eval_df=...
# model
from simpletransformers.classification import ClassificationModel
import torch, sklearn

cuda_available = torch.cuda.is_available()  # train on GPU when one is available
model_args = {
"use_early_stopping": True,
"early_stopping_delta": 0.01,
"early_stopping_metric": "mcc",
"early_stopping_metric_minimize": False,
"early_stopping_patience": 5,
"evaluate_during_training_steps": 1000,
"fp16": False,
"num_train_epochs":3
}
model = ClassificationModel(
"bert",
"dbmdz/bert-base-turkish-cased",
use_cuda=cuda_available,
args=model_args,
num_labels=7
)
model.train_model(train_df, acc=sklearn.metrics.accuracy_score)
```
For other training models please check https://simpletransformers.ai/
For the detailed usage of Turkish Text Classification please check [python notebook](https://github.com/savasy/TurkishTextClassification/blob/master/Bert_base_Text_Classification_for_Turkish.ipynb)
|
ughvom/Ginger | c5eb17aab7b6f1cc28f59e188d8a10fc33640156 | 2022-01-16T14:43:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ughvom | null | ughvom/Ginger | 302 | null | transformers | 2,974 | ---
tags:
- conversational
---
# Ginger DialoGPT Model |
CurtisBowser/DialoGPT-medium-sora | 8c2d77c3a6fac75cd3600c4669a464ae08ab96c9 | 2022-06-04T19:17:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CurtisBowser | null | CurtisBowser/DialoGPT-medium-sora | 301 | null | transformers | 2,975 | ---
tags:
- conversational
---
# Sora DialoGPT Model
|
Shike/DialoGPT_medium_harrypotter | 3fdaf22288960a3d62967af233daf3f266d69b97 | 2021-08-27T14:58:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Shike | null | Shike/DialoGPT_medium_harrypotter | 301 | null | transformers | 2,976 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
colorfulscoop/gpt2-small-ja | f7257d983adc9201edd1e74a3b3e3b9c8e1529ce | 2021-09-27T11:50:17.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"transformers",
"license:cc"
] | text-generation | false | colorfulscoop | null | colorfulscoop/gpt2-small-ja | 301 | null | transformers | 2,977 | ---
language: ja
datasets: wikipedia
widget:
- text: 統計的機械学習でのニューラルネットワーク
license: cc
---
# GPT-2 small Japanese model
This repository contains a GPT-2 small model trained on a Japanese Wikipedia dataset.
## Training data
The [Japanese Wikipedia](https://ja.wikipedia.org/wiki/Wikipedia:データベースダウンロード) dataset as of Aug 20, 2021, released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), is used for both the tokenizer and the GPT-2 model.
We split the dataset into three subsets - train, valid and test sets. Both the tokenizer and the model were trained on the train set.
The train set contains around 540M tokens.
## Model description
The model architecture is the same as the GPT-2 small model (n_ctx: 1024, n_embd: 768, n_head: 12, n_layer: 12) except for the vocabulary size.
The vocabulary size is set to 32,000 instead of the original 50,257.
`transformers.GPT2LMHeadModel` is used for training.
## Tokenizer description
[SentencePiece](https://github.com/google/sentencepiece) is used as a tokenizer for this model.
We utilized 1,000,000 sentences from the train set.
The vocabulary size was 32,000.
The `add_dummy_prefix` option was set to `True` because Japanese words are not separated by whitespace.
After training, the tokenizer model was imported as `transformers.BertGenerationTokenizer`
because it supports SentencePiece models and does not add any special tokens by default,
which is especially useful for a text generation task.
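For reference, a SentencePiece training call matching the description above might look like the sketch below; the input file name is an assumption and the call is illustrative rather than the exact command used.

```py
import sentencepiece as spm
from transformers import BertGenerationTokenizer

# Assumption: "wiki_train.txt" holds the 1,000,000 sampled sentences, one per line.
spm.SentencePieceTrainer.train(
    input="wiki_train.txt",
    model_prefix="sp",
    vocab_size=32000,
    add_dummy_prefix=True,  # prepend a dummy whitespace, since Japanese has no word separators
)

# The resulting model file can then be loaded with a SentencePiece-aware tokenizer class.
tokenizer = BertGenerationTokenizer(vocab_file="sp.model")
```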
## Training
The model was trained on the train set for 30 epochs with batch size 32. Each sample contained 1024 tokens.
We used the Adam optimizer. The learning rate was linearly increased from `0` to `1e-4` during the first 10,000 steps.
The gradient clipping norm was set to `1.0`.
The test set perplexity of the trained model was 29.13.
Please refer to [GitHub](https://github.com/colorfulscoop/gpt-ja) for more training details.
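A rough sketch of the optimizer setup described above is given below. It assumes the learning rate stays at `1e-4` after the 10,000 warm-up steps (only the warm-up is documented) and that `model` and `train_loader` are already prepared; it is an illustration, not the exact training script.

```py
import torch
from transformers import get_constant_schedule_with_warmup

# Assumptions: `model` is the GPT2LMHeadModel being trained and `train_loader`
# yields tensors of 1024-token samples with batch size 32.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=10_000)

for batch in train_loader:
    loss = model(input_ids=batch, labels=batch).loss  # language-modeling loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip norm 1.0
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```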
## Usage
First, install the dependencies.
```sh
$ pip install transformers==4.10.0 torch==1.8.1 sentencepiece==0.1.96
```
Then use pipeline to generate sentences.
```sh
>>> import transformers
>>> pipeline = transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja")
>>> pipeline("統計的機械学習でのニューラルネットワーク", do_sample=True, top_p=0.95, top_k=50, num_return_sequences=3)
```
**Note:** The default model configuration `config.json` sets parameters for text generation with `do_sample=True`, `top_k=50`, `top_p=0.95`.
Override these parameters explicitly when you need different generation behaviour.
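For instance, to switch to greedy decoding instead of sampling (the values below are illustrative):

```sh
>>> pipeline("統計的機械学習でのニューラルネットワーク", do_sample=False, max_length=50)
```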
## Versions
We recommend to specify `revision` to load the model for reproducibility.
| Revision | Date of Wikipedia dump |
| --- | --- |
| 20210820.1.0 | Aug 20, 2021 |
| 20210301.1.0 | March 1, 2021 |
You can specify `revision` as follows.
```py
# Example of pipeline
>>> transformers.pipeline("text-generation", "colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
# Example of AutoModel
>>> transformers.AutoModel.from_pretrained("colorfulscoop/gpt2-small-ja", revision="20210820.1.0")
```
## License
All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
**Disclaimer:** The model may generate text that closely resembles the training data, text that is not true, or biased text. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
**Author:** Colorful Scoop
|
google/bert2bert_L-24_wmt_en_de | 72f1b1ab9bac8115da5b6d0176e4b9d80467f4ad | 2020-12-11T21:41:17.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"de",
"dataset:wmt14",
"arxiv:1907.12461",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | google | null | google/bert2bert_L-24_wmt_en_de | 301 | null | transformers | 2,978 | ---
language:
- en
- de
license: apache-2.0
datasets:
- wmt14
tags:
- translation
---
# bert2bert_L-24_wmt_en_de EncoderDecoder model
The model was introduced in
[this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/bert24_en_de/1).
The model is an encoder-decoder model that was initialized on the `bert-large` checkpoints for both the encoder
and decoder and fine-tuned on English to German translation on the WMT dataset, which is linked above.
Disclaimer: The model card has been written by the Hugging Face team.
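The warm start described above (initializing both the encoder and the decoder from `bert-large` checkpoints before fine-tuning on WMT) can be sketched roughly as follows; the checkpoint name and special-token handling are assumptions for illustration, not the exact recipe used to produce this model.

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Assumption: "bert-large-cased" stands in for the actual BERT-large checkpoint used.
tokenizer = BertTokenizer.from_pretrained("bert-large-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-large-cased", "bert-large-cased"  # encoder, decoder (cross-attention layers are added to the decoder)
)

# Generation needs to know which token ids start and pad decoder sequences.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# The model would then be fine-tuned on WMT14 English-German pairs with a standard seq2seq loop.
```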
## How to use
You can use this model for translation, *e.g.*
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/bert2bert_L-24_wmt_en_de", pad_token="<pad>", eos_token="</s>", bos_token="<s>")
model = AutoModelForSeq2SeqLM.from_pretrained("google/bert2bert_L-24_wmt_en_de")
sentence = "Would you like to grab a coffee with me this week?"
input_ids = tokenizer(sentence, return_tensors="pt", add_special_tokens=False).input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Möchten Sie diese Woche einen Kaffee mit mir schnappen?
|
huggingtweets/3thyr3al | 10582a4b810ed4535543800e4aa79b6d9703379a | 2021-05-21T16:37:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/3thyr3al | 301 | null | transformers | 2,979 | ---
language: en
thumbnail: https://www.huggingtweets.com/3thyr3al/1617942034431/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1362160113247793153/VEYzwQTI_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">ethy (3thyreඞl)🏺 🤖 AI Bot </div>
<div style="font-size: 15px">@3thyr3al bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@3thyr3al's tweets](https://twitter.com/3thyr3al).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1727 |
| Retweets | 360 |
| Short tweets | 539 |
| Tweets kept | 828 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tr059nk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @3thyr3al's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m9xvw9pq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m9xvw9pq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/3thyr3al')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/_lukeharris | 788d061bec9cd4cd319fe7166bbd4f3522351acb | 2021-05-21T17:06:07.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/_lukeharris | 301 | null | transformers | 2,980 | ---
language: en
thumbnail: https://www.huggingtweets.com/_lukeharris/1602255697233/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1313937284715212801/sRSBd581_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Luke Harris 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@_lukeharris bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@_lukeharris's tweets](https://twitter.com/_lukeharris).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>1232</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>470</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>102</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>660</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2vhslate/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_lukeharris's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/3ae8jfk6) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/3ae8jfk6/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/_lukeharris'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/cf__bundy | 5c4ccdf6c5285c25130dafa933e8bf1d23ea1fa0 | 2021-07-03T04:06:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/cf__bundy | 301 | null | transformers | 2,981 | ---
language: en
thumbnail: https://www.huggingtweets.com/cf__bundy/1625285188781/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308125167608934400/CHIV0pn3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ty</div>
<div style="text-align: center; font-size: 14px;">@cf__bundy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ty.
| Data | ty |
| --- | --- |
| Tweets downloaded | 1009 |
| Retweets | 117 |
| Short tweets | 200 |
| Tweets kept | 692 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2li311zj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cf__bundy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hxi4q6u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hxi4q6u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cf__bundy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/eduardofep | 2d63d8658dc2be35829aa1eeacc9cecbac208ef8 | 2021-05-22T02:42:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/eduardofep | 301 | null | transformers | 2,982 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1220097421520330754/5EMFQQ01_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">eduardo felipe III 🤖 AI Bot </div>
<div style="font-size: 15px">@eduardofep bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@eduardofep's tweets](https://twitter.com/eduardofep).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 681 |
| Retweets | 22 |
| Short tweets | 84 |
| Tweets kept | 575 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pyky4s3v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eduardofep's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6jtxj206) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6jtxj206/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eduardofep')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ellis_hughes | d3da22b24e091715fc71d85c6e6d6ab667ed9111 | 2021-07-18T18:42:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ellis_hughes | 301 | null | transformers | 2,983 | ---
language: en
thumbnail: https://www.huggingtweets.com/ellis_hughes/1626633732954/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1004536007012651008/ZWJUeJ2W_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ellis Hughes</div>
<div style="text-align: center; font-size: 14px;">@ellis_hughes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ellis Hughes.
| Data | Ellis Hughes |
| --- | --- |
| Tweets downloaded | 2170 |
| Retweets | 396 |
| Short tweets | 91 |
| Tweets kept | 1683 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rqrdlum/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ellis_hughes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3n17xu9k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3n17xu9k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ellis_hughes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/enilox-madacol-ricardocalleja | 23ec6695dcec7d96781b1a3e4bf1434827f89d8a | 2021-05-22T03:11:59.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/enilox-madacol-ricardocalleja | 301 | null | transformers | 2,984 | ---
language: en
thumbnail: https://www.huggingtweets.com/enilox-madacol-ricardocalleja/1620512214792/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1242590778142142466/rLBXvD75_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1195827899/images_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1779482275/131020101290_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ricardo Calleja & Marco D'Agostini & Eliecer Aldana</div>
<div style="text-align: center; font-size: 14px;">@enilox-madacol-ricardocalleja</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ricardo Calleja & Marco D'Agostini & Eliecer Aldana.
| Data | Ricardo Calleja | Marco D'Agostini | Eliecer Aldana |
| --- | --- | --- | --- |
| Tweets downloaded | 396 | 3209 | 884 |
| Retweets | 213 | 1970 | 622 |
| Short tweets | 32 | 244 | 45 |
| Tweets kept | 151 | 995 | 217 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1keiiwwy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @enilox-madacol-ricardocalleja's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hem46kg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hem46kg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/enilox-madacol-ricardocalleja')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/h21k | 8cde657e2cb59e5f1d4a875a03101c32aa267d0c | 2021-05-22T06:24:56.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/h21k | 301 | null | transformers | 2,985 | ---
language: en
thumbnail: https://www.huggingtweets.com/h21k/1602301931118/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/993273677386059777/TngqqZck_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Frank Soboczenski 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@h21k bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@h21k's tweets](https://twitter.com/h21k).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>204</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>176</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3vw58heg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @h21k's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/15xkammd) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/15xkammd/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/h21k'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/johnchildren | ed6fe0df971a49b903f32169876e45fc8571fd88 | 2021-05-22T09:57:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/johnchildren | 301 | null | transformers | 2,986 | ---
language: en
thumbnail: https://www.huggingtweets.com/johnchildren/1616680079652/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1286379285712973825/2fNV7V9s_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">John Children 🤖 AI Bot </div>
<div style="font-size: 15px">@johnchildren bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@johnchildren's tweets](https://twitter.com/johnchildren).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2269 |
| Retweets | 647 |
| Short tweets | 153 |
| Tweets kept | 1469 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3drr7v4j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @johnchildren's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/339vittr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/339vittr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/johnchildren')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/lukashasnoidea | 2509221f9d6d48d24fa0d81f2a2e0bf41aeb8e16 | 2021-05-22T12:47:58.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lukashasnoidea | 301 | null | transformers | 2,987 | ---
language: en
thumbnail: https://www.huggingtweets.com/lukashasnoidea/1614119476128/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1304574909654487040/N5GSg7YD_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">lukas 🏳️🌈 🤖 AI Bot </div>
<div style="font-size: 15px">@lukashasnoidea bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@lukashasnoidea's tweets](https://twitter.com/lukashasnoidea).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1557 |
| Retweets | 829 |
| Short tweets | 132 |
| Tweets kept | 596 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/34q723uy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lukashasnoidea's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2unka64i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2unka64i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lukashasnoidea')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mrmeatscience | 366425befd18240bdc4fcc55302feff3aab25218 | 2021-05-22T15:25:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mrmeatscience | 301 | null | transformers | 2,988 | ---
language: en
thumbnail: https://www.huggingtweets.com/mrmeatscience/1616698328401/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/860937813868654593/pSU21JFl_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Chet Humphries 🤖 AI Bot </div>
<div style="font-size: 15px">@mrmeatscience bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@mrmeatscience's tweets](https://twitter.com/mrmeatscience).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1483 |
| Retweets | 641 |
| Short tweets | 121 |
| Tweets kept | 721 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/301hr630/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mrmeatscience's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b1pd4nz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b1pd4nz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mrmeatscience')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/najmc | aad6d61e77a6d98032479984ddd60816513c2e15 | 2021-05-22T15:43:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/najmc | 301 | null | transformers | 2,989 | ---
language: en
thumbnail: https://www.huggingtweets.com/najmc/1608309975570/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1010829198783602688/SCcQ6M3O_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Najm Clayton 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@najmc bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@najmc's tweets](https://twitter.com/najmc).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3172</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>2115</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>170</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>887</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3gva8vjg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @najmc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tp9lbby) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tp9lbby/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/najmc'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/nhlrumorsdaily | 952b5ebc4db67b845085b7ff4439a3bb48cd7f81 | 2021-09-14T23:52:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nhlrumorsdaily | 301 | null | transformers | 2,990 | ---
language: en
thumbnail: https://www.huggingtweets.com/nhlrumorsdaily/1631663556170/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1230668680066891776/NrwCWFUg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NRD</div>
<div style="text-align: center; font-size: 14px;">@nhlrumorsdaily</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NRD.
| Data | NRD |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 282 |
| Short tweets | 576 |
| Tweets kept | 2389 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/362t5kc0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nhlrumorsdaily's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9pxaxgg1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9pxaxgg1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nhlrumorsdaily')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/pastellexists | 8714bd0425191294d631e1b21de54fdeef0c829a | 2021-06-24T00:10:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/pastellexists | 301 | null | transformers | 2,991 | ---
language: en
thumbnail: https://www.huggingtweets.com/pastellexists/1624493429168/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1257778600838926343/wibaaKV6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">pastell</div>
<div style="text-align: center; font-size: 14px;">@pastellexists</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from pastell.
| Data | pastell |
| --- | --- |
| Tweets downloaded | 3210 |
| Retweets | 732 |
| Short tweets | 91 |
| Tweets kept | 2387 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5lqxaa5l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pastellexists's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2y0xb5js) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2y0xb5js/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pastellexists')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/roedeerrootie | 92d1c605942b19921e27cf76528c241f05991883 | 2021-06-23T18:36:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/roedeerrootie | 301 | null | transformers | 2,992 | ---
language: en
thumbnail: https://www.huggingtweets.com/roedeerrootie/1624473381138/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1399885746392092675/_GRuvCla_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rootie</div>
<div style="text-align: center; font-size: 14px;">@roedeerrootie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rootie.
| Data | Rootie |
| --- | --- |
| Tweets downloaded | 3209 |
| Retweets | 902 |
| Short tweets | 317 |
| Tweets kept | 1990 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/p726kemt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @roedeerrootie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2my39bl0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2my39bl0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/roedeerrootie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sinirlasansiz | 6aa7eec364b44d0bd12ce5fe50d265ecaf00aae7 | 2021-05-22T22:58:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/sinirlasansiz | 301 | null | transformers | 2,993 | ---
language: en
thumbnail: https://www.huggingtweets.com/sinirlasansiz/1616940697619/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1186030454572490757/rRH-LcBr_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">BazenFurkan 🤖 AI Bot </div>
<div style="font-size: 15px">@sinirlasansiz bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@sinirlasansiz's tweets](https://twitter.com/sinirlasansiz).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 688 |
| Retweets | 6 |
| Short tweets | 43 |
| Tweets kept | 639 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5js76uys/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sinirlasansiz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pq3jwah) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pq3jwah/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sinirlasansiz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/smokyblue__ | 1c26f22cbe5fe3c71fedcde880455c32666a20a0 | 2021-05-22T23:11:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/smokyblue__ | 301 | null | transformers | 2,994 | ---
language: en
thumbnail: https://www.huggingtweets.com/smokyblue__/1610893224130/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1245434376789397511/8EN5syw3_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Smoky Blue 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@smokyblue__ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@smokyblue__'s tweets](https://twitter.com/smokyblue__).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3019</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>2681</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>88</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>250</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20f3u1ck/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @smokyblue__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eg3neoby) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eg3neoby/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/smokyblue__'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/wherewasmybrain | d790454dfda2ba7037f1c5888c9765ae35e94623 | 2021-05-23T04:23:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/wherewasmybrain | 301 | null | transformers | 2,995 | ---
language: en
thumbnail: https://www.huggingtweets.com/wherewasmybrain/1614466108345/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278021136387903491/UiDVL30Q_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Titled Goose 🤖 AI Bot </div>
<div style="font-size: 15px">@wherewasmybrain bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@wherewasmybrain's tweets](https://twitter.com/wherewasmybrain).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2479 |
| Retweets | 528 |
| Short tweets | 235 |
| Tweets kept | 1716 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23paobou/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wherewasmybrain's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3jxgjfaw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3jxgjfaw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wherewasmybrain')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jsylee/scibert_scivocab_uncased-finetuned-ner | 609e6d9db9010d9a0780de954f23dd5c2fb0ed25 | 2021-11-22T03:52:41.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:ade_corpus_v2",
"transformers",
"Named Entity Recognition",
"SciBERT",
"Adverse Effect",
"Drug",
"Medical",
"autotrain_compatible"
] | token-classification | false | jsylee | null | jsylee/scibert_scivocab_uncased-finetuned-ner | 301 | 3 | transformers | 2,996 | ---
language:
- en
tags:
- Named Entity Recognition
- SciBERT
- Adverse Effect
- Drug
- Medical
datasets:
- ade_corpus_v2
widget:
- text: "Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug."
example_title: "Abortion, miscarriage, ..."
- text: "Addiction to many sedatives and analgesics, such as diazepam, morphine, etc."
example_title: "Addiction to many..."
- text: "Birth defects associated with thalidomide"
example_title: "Birth defects associated..."
- text: "Bleeding of the intestine associated with aspirin therapy"
example_title: "Bleeding of the intestine..."
- text: "Cardiovascular disease associated with COX-2 inhibitors (i.e. Vioxx)"
example_title: "Cardiovascular disease..."
---
This is a SciBERT-based model fine-tuned to perform Named Entity Recognition for drug names and adverse drug effects.

This model classifies input tokens into one of five classes:
- `B-DRUG`: beginning of a drug entity
- `I-DRUG`: within a drug entity
- `B-EFFECT`: beginning of an AE entity
- `I-EFFECT`: within an AE entity
- `O`: outside either of the above entities
To get started using this model for inference, simply set up an NER `pipeline` like below:
```python
from transformers import (AutoModelForTokenClassification,
AutoTokenizer,
pipeline,
)
model_checkpoint = "jsylee/scibert_scivocab_uncased-finetuned-ner"
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=5,
id2label={0: 'O', 1: 'B-DRUG', 2: 'I-DRUG', 3: 'B-EFFECT', 4: 'I-EFFECT'}
)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model_pipeline = pipeline(task="ner", model=model, tokenizer=tokenizer)
print(model_pipeline("Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug."))
```
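In recent `transformers` releases, the NER pipeline can also merge subword pieces into whole entity spans. A minimal sketch reusing the `model` and `tokenizer` objects created above; it assumes `aggregation_strategy` is available in your installed version (older releases used the `grouped_entities=True` flag for similar behaviour):
```python
from transformers import pipeline

# "simple" merges consecutive subword tokens that share a predicted entity type
grouped_pipeline = pipeline(task="ner", model=model, tokenizer=tokenizer,
                            aggregation_strategy="simple")

for entity in grouped_pipeline("Bleeding of the intestine associated with aspirin therapy"):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```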
SciBERT: https://huggingface.co/allenai/scibert_scivocab_uncased
Dataset: https://huggingface.co/datasets/ade_corpus_v2
|
yangheng/deberta-v3-base-absa-v1.1 | e7440e977994d4b49f3af408b2fe00a63db025ad | 2022-03-19T00:31:47.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:laptop14",
"dataset:restaurant14",
"dataset:restaurant16",
"dataset:ACL-Twitter",
"dataset:MAMS",
"dataset:Television",
"dataset:TShirt",
"dataset:Yelp",
"arxiv:2110.08604",
"transformers",
"aspect-based-sentiment-analysis",
"PyABSA",
"license:mit"
] | text-classification | false | yangheng | null | yangheng/deberta-v3-base-absa-v1.1 | 301 | null | transformers | 2,997 |
---
language:
- en
tags:
- aspect-based-sentiment-analysis
- PyABSA
license: mit
datasets:
- laptop14
- restaurant14
- restaurant16
- ACL-Twitter
- MAMS
- Television
- TShirt
- Yelp
metrics:
- accuracy
- macro-f1
widget:
- text: "[CLS] when tables opened up, the manager sat another party before us. [SEP] manager [SEP] "
---
# Note
This model was trained with 30k+ ABSA samples; see [ABSADatasets](https://github.com/yangheng95/ABSADatasets). The test sets are not included in pre-training, so you can use this model for training and benchmarking on common ABSA datasets, e.g., the Laptop14 and Rest14 datasets. (Except for the Rest15 dataset!)
# DeBERTa for aspect-based sentiment analysis
The `deberta-v3-base-absa` model for aspect-based sentiment analysis, trained with English datasets from [ABSADatasets](https://github.com/yangheng95/ABSADatasets).
## Training Model
This model was trained with the FAST-LCF-BERT architecture on top of `microsoft/deberta-v3-base`, which comes from [PyABSA](https://github.com/yangheng95/PyABSA).
To track state-of-the-art models, please see [PyABSA](https://github.com/yangheng95/PyABSA).
## Usage
```python3
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
model = AutoModelForSequenceClassification.from_pretrained("yangheng/deberta-v3-base-absa-v1.1")
```
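The widget example above hints at the expected input: a sentence paired with an aspect term. A minimal inference sketch under that assumption; the sentence/aspect pair encoding and the use of the checkpoint's stored `id2label` mapping are inferred from the card rather than documented, so treat them as assumptions:
```python
import torch

# reuse the `tokenizer` and `model` loaded above
text = "when tables opened up, the manager sat another party before us."
aspect = "manager"

# encoding the pair produces "[CLS] text [SEP] aspect [SEP]", matching the widget example
inputs = tokenizer(text, aspect, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # a sentiment label for the aspect, e.g. Negative
```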
## Example in PyABSA
An [example](https://github.com/yangheng95/PyABSA/blob/release/demos/aspect_polarity_classification/train_apc_multilingual.py) of using FAST-LCF-BERT with the PyABSA datasets.
## Datasets
This model was fine-tuned with 180k ABSA examples (including augmented data). Training dataset files:
```
loading: integrated_datasets/apc_datasets/SemEval/laptop14/Laptops_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant14/Restaurants_Train.xml.seg
loading: integrated_datasets/apc_datasets/SemEval/restaurant16/restaurant_train.raw
loading: integrated_datasets/apc_datasets/ACL_Twitter/acl-14-short-data/train.raw
loading: integrated_datasets/apc_datasets/MAMS/train.xml.dat
loading: integrated_datasets/apc_datasets/Television/Television_Train.xml.seg
loading: integrated_datasets/apc_datasets/TShirt/Menstshirt_Train.xml.seg
loading: integrated_datasets/apc_datasets/Yelp/yelp.train.txt
```
If you use this model in your research, please cite our paper:
```
@article{YangZMT21,
author = {Heng Yang and
Biqing Zeng and
Mayi Xu and
Tianxing Wang},
title = {Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable
Sentiment Dependency Learning},
journal = {CoRR},
volume = {abs/2110.08604},
year = {2021},
url = {https://arxiv.org/abs/2110.08604},
eprinttype = {arXiv},
eprint = {2110.08604},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-08604.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
rachelcorey/DialoGPT-medium-kramer | 351730b4fe3dd81ff5962671f81873ceb3e1888a | 2022-01-04T13:59:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rachelcorey | null | rachelcorey/DialoGPT-medium-kramer | 300 | null | transformers | 2,998 | ---
tags:
- conversational
---
# a chatbot based on Cosmo Kramer
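The card gives no usage snippet, so here is a minimal sketch that assumes the standard DialoGPT single-turn generation recipe; the prompt text and generation settings are illustrative and not part of the original card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rachelcorey/DialoGPT-medium-kramer")
model = AutoModelForCausalLM.from_pretrained("rachelcorey/DialoGPT-medium-kramer")

# DialoGPT expects the user turn followed by the end-of-sequence token
prompt = "Hey Kramer, what's the deal with airline food?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# sampling tends to give livelier replies than greedy decoding
reply_ids = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.92,
                           pad_token_id=tokenizer.eos_token_id)

# decode only the newly generated tokens (the bot's turn)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```
Multi-turn chat can be handled the same way by concatenating the previous turns, each followed by `tokenizer.eos_token`, before generating the next reply. |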
mrm8488/spanish-TinyBERT-betito | 37c59c93b730e4e1a0ee0a02d6fe2775e903aaf1 | 2022-03-07T15:37:36.000Z | [
"pytorch",
"bert",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"tinybert"
] | null | false | mrm8488 | null | mrm8488/spanish-TinyBERT-betito | 300 | null | transformers | 2,999 | ---
language:
- es
tags:
- spanish
- tinybert
datasets:
- large_spanish_corpus
---
# BETito (Spanish TinyBERT for BETO) |