| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
huggingartists/mating-ritual
|
huggingartists
| 2021-09-18T01:00:55Z
| 9
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/mating-ritual",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/mating-ritual
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2a5b556758315c192c7b1e6e86634c7d.600x600x1.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mating Ritual</div>
<a href="https://genius.com/artists/mating-ritual">
<div style="text-align: center; font-size: 14px;">@mating-ritual</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Mating Ritual.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/mating-ritual) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/mating-ritual")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3cljintu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Mating Ritual's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/dv1g3x3b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/dv1g3x3b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/mating-ritual')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/mating-ritual")
model = AutoModelWithLMHead.from_pretrained("huggingartists/mating-ritual")
```
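As a quick follow-up, here is a minimal generation sketch using the `tokenizer` and `model` loaded above; the sampling settings are illustrative, not the values used for this model.
```python
# Illustrative only: continue a prompt with the model and tokenizer loaded above.
inputs = tokenizer("I am", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```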
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
muirkat/tolkien-mythopoeic-gen
|
muirkat
| 2021-09-17T21:28:53Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: tolkien-mythopoeic-gen
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tolkien-mythopoeic-gen
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on Tolkien's mythopoeic works, namely The Silmarillion and Unfinished Tales of Númenor and Middle-earth.
It achieves the following results on the evaluation set:
- Loss: 3.5110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
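As a hedged sketch, these values might be expressed with Transformers' `TrainingArguments` roughly as follows; the `output_dir` is a hypothetical path, and the Adam settings and linear scheduler listed above are the library defaults.
```python
from transformers import TrainingArguments

# Sketch only: the reported hyperparameters mapped onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="tolkien-mythopoeic-gen",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```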
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5732 | 1.0 | 145 | 3.5110 |
| 3.5713 | 2.0 | 290 | 3.5110 |
| 3.5718 | 3.0 | 435 | 3.5110 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
huggingartists/green-day
|
huggingartists
| 2021-09-17T07:15:56Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/green-day",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/green-day
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d7d8da365bad13b7bd5cc89117b697eb.640x640x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Green Day</div>
<a href="https://genius.com/artists/green-day">
<div style="text-align: center; font-size: 14px;">@green-day</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Green Day.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/green-day) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/green-day")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/22eap04b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Green Day's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/183da0m9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/183da0m9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/green-day')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/green-day")
model = AutoModelWithLMHead.from_pretrained("huggingartists/green-day")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
linydub/bart-large-samsum
|
linydub
| 2021-09-17T00:55:29Z
| 273
| 16
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"azureml",
"azure",
"codecarbon",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z
|
---
language:
- en
license: apache-2.0
tags:
- summarization
- azureml
- azure
- codecarbon
- bart
datasets:
- samsum
metrics:
- rouge
model-index:
- name: bart-large-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 55.0234
- name: Validation ROUGE-2
type: rouge-2
value: 29.6005
- name: Validation ROUGE-L
type: rouge-L
value: 44.914
- name: Validation ROUGE-Lsum
type: rouge-Lsum
value: 50.464
- name: Test ROUGE-1
type: rouge-1
value: 53.4345
- name: Test ROUGE-2
type: rouge-2
value: 28.7445
- name: Test ROUGE-L
type: rouge-L
value: 44.1848
- name: Test ROUGE-Lsum
type: rouge-Lsum
value: 49.1874
widget:
- text: |
Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet?
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend.
Henry: Nice, I'm really looking forward to seeing them again.
---
## `bart-large-samsum`
This model was trained using Microsoft's [`Azure Machine Learning Service`](https://azure.microsoft.com/en-us/services/machine-learning). It was fine-tuned on the [`samsum`](https://huggingface.co/datasets/samsum) corpus from the [`facebook/bart-large`](https://huggingface.co/facebook/bart-large) checkpoint.
## Usage (Inference)
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="linydub/bart-large-samsum")
input_text = '''
Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet?
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend.
Henry: Nice, I'm really looking forward to seeing them again.
'''
summarizer(input_text)
```
## Fine-tune on AzureML
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Flinydub%2Fazureml-greenai-txtsum%2Fmain%2F.cloud%2Ftemplate-hub%2Flinydub%2Farm-bart-large-samsum.json) [](http://armviz.io/#/?load=https://raw.githubusercontent.com/linydub/azureml-greenai-txtsum/main/.cloud/template-hub/linydub/arm-bart-large-samsum.json)
More information about the fine-tuning process (including samples and benchmarks):
**[Preview]** https://github.com/linydub/azureml-greenai-txtsum
## Resource Usage
These results were retrieved from [`Azure Monitor Metrics`](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics). All experiments were run on AzureML low-priority compute clusters.
| Key | Value |
| --- | ----- |
| Region | US West 2 |
| AzureML Compute SKU | STANDARD_ND40RS_V2 |
| Compute SKU GPU Device | 8 x NVIDIA V100 32GB (NVLink) |
| Compute Node Count | 1 |
| Run Duration | 6m 48s |
| Compute Cost (Dedicated/LowPriority) | $2.50 / $0.50 USD |
| Average CPU Utilization | 47.9% |
| Average GPU Utilization | 69.8% |
| Average GPU Memory Usage | 25.71 GB |
| Total GPU Energy Usage | 370.84 kJ |
*Compute cost ($) is estimated from the run duration, the number of compute nodes utilized, and the SKU's price per hour. Updated SKU pricing can be found [here](https://azure.microsoft.com/en-us/pricing/details/machine-learning).
### Carbon Emissions
These results were obtained using [`CodeCarbon`](https://github.com/mlco2/codecarbon). The carbon emissions are estimated from training runtime only (excl. setup and evaluation runtimes).
| Key | Value |
| --- | ----- |
| timestamp | 2021-09-16T23:54:25 |
| duration (s) | 263.2430217266083 |
| emissions (kg CO₂eq) | 0.029715544634717518 |
| energy_consumed (kWh) | 0.09985062041235725 |
| country_name | USA |
| region | Washington |
| cloud_provider | azure |
| cloud_region | westus2 |
## Hyperparameters
- max_source_length: 512
- max_target_length: 90
- fp16: True
- seed: 1
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 1
- learning_rate: 5e-5
- num_train_epochs: 3.0
- weight_decay: 0.1
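As a hedged sketch, the values above might map onto `Seq2SeqTrainingArguments` roughly as follows; `max_source_length`/`max_target_length` are applied at tokenization time rather than here, and the `output_dir` is hypothetical.
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: the reported hyperparameters expressed as trainer arguments.
# max_source_length / max_target_length are used when tokenizing the
# dialogues and target summaries, not passed to the trainer.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-samsum",  # hypothetical output path
    fp16=True,
    seed=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    num_train_epochs=3.0,
    weight_decay=0.1,
)
```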
## Results
| ROUGE | Score |
| ----- | ----- |
| eval_rouge1 | 55.0234 |
| eval_rouge2 | 29.6005 |
| eval_rougeL | 44.914 |
| eval_rougeLsum | 50.464 |
| predict_rouge1 | 53.4345 |
| predict_rouge2 | 28.7445 |
| predict_rougeL | 44.1848 |
| predict_rougeLsum | 49.1874 |
| Metric | Value |
| ------ | ----- |
| epoch | 3.0 |
| eval_gen_len | 30.6027 |
| eval_loss | 1.4327096939086914 |
| eval_runtime | 22.9127 |
| eval_samples | 818 |
| eval_samples_per_second | 35.701 |
| eval_steps_per_second | 0.306 |
| predict_gen_len | 30.4835 |
| predict_loss | 1.4501988887786865 |
| predict_runtime | 26.0269 |
| predict_samples | 819 |
| predict_samples_per_second | 31.467 |
| predict_steps_per_second | 0.269 |
| train_loss | 1.2014821151207233 |
| train_runtime | 263.3678 |
| train_samples | 14732 |
| train_samples_per_second | 167.811 |
| train_steps_per_second | 1.321 |
| total_steps | 348 |
| total_flops | 4.26008990669865e+16 |
|
huggingtweets/naisu9k
|
huggingtweets
| 2021-09-16T23:32:02Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/naisu9k/1631835118534/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433277192612556801/MIcVKoh3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">naisu</div>
<div style="text-align: center; font-size: 14px;">@naisu9k</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from naisu.
| Data | naisu |
| --- | --- |
| Tweets downloaded | 3211 |
| Retweets | 1023 |
| Short tweets | 424 |
| Tweets kept | 1764 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/84y08bav/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @naisu9k's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jtnpwkvw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jtnpwkvw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/naisu9k')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingartists/big-baby-tape
|
huggingartists
| 2021-09-16T21:25:31Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/big-baby-tape",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/big-baby-tape
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d3fc4853f74c35383ec68670bbd292eb.709x709x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Big Baby Tape</div>
<a href="https://genius.com/artists/big-baby-tape">
<div style="text-align: center; font-size: 14px;">@big-baby-tape</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Big Baby Tape.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/big-baby-tape) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/big-baby-tape")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1mu9ki6z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Big Baby Tape's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/30qklxvh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/30qklxvh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/big-baby-tape')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/big-baby-tape")
model = AutoModelWithLMHead.from_pretrained("huggingartists/big-baby-tape")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/arash
|
huggingartists
| 2021-09-16T15:18:43Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/arash",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/arash
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/de78420433126e9e426443d10bf22edf.600x600x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Arash</div>
<a href="https://genius.com/artists/arash">
<div style="text-align: center; font-size: 14px;">@arash</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Arash.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/arash) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/arash")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/27u6df87/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Arash's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3eav8xpf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3eav8xpf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/arash')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/arash")
model = AutoModelWithLMHead.from_pretrained("huggingartists/arash")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/ravisankar_g
|
huggingtweets
| 2021-09-16T11:00:51Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/ravisankar_g/1631789963281/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/500484562489577472/qjf8YuI9_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ravi Sankar Guntur</div>
<div style="text-align: center; font-size: 14px;">@ravisankar_g</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ravi Sankar Guntur.
| Data | Ravi Sankar Guntur |
| --- | --- |
| Tweets downloaded | 1844 |
| Retweets | 785 |
| Short tweets | 183 |
| Tweets kept | 876 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11bn0l7s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ravisankar_g's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5llpcz6a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5llpcz6a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ravisankar_g')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
model-attribution-challenge/xlnet-base-cased
|
model-attribution-challenge
| 2021-09-16T09:43:58Z
| 10
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"xlnet",
"text-generation",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1906.08237",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-08T21:53:14Z
|
---
language: en
license: mit
datasets:
- bookcorpus
- wikipedia
---
# XLNet (base-sized model)
XLNet model pre-trained on the English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/).
Disclaimer: The team releasing XLNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
## Intended uses & limitations
The model is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlnet) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import XLNetTokenizer, XLNetModel
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetModel.from_pretrained('xlnet-base-cased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
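Since the card notes that the checkpoint is mainly meant to be fine-tuned, here is a hedged sketch of loading it with a sequence-classification head; `num_labels=2` is an arbitrary example, and the head weights are randomly initialized until you fine-tune them.
```python
from transformers import XLNetForSequenceClassification, XLNetTokenizer

# Sketch only: attach a (randomly initialized) classification head.
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels=2)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
logits = model(**inputs).logits  # untrained scores until fine-tuned
```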
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1906-08237,
author = {Zhilin Yang and
Zihang Dai and
Yiming Yang and
Jaime G. Carbonell and
Ruslan Salakhutdinov and
Quoc V. Le},
title = {XLNet: Generalized Autoregressive Pretraining for Language Understanding},
journal = {CoRR},
volume = {abs/1906.08237},
year = {2019},
url = {http://arxiv.org/abs/1906.08237},
eprinttype = {arXiv},
eprint = {1906.08237},
timestamp = {Mon, 24 Jun 2019 17:28:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1906-08237.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
raruidol/GameANchess
|
raruidol
| 2021-09-16T08:53:50Z
| 5
| 1
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
A language model of complete chess games, represented as sequences of moves in algebraic notation.
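The card does not include a usage example. As a hedged sketch (assuming the model expects move sequences in standard algebraic notation, as the description suggests), one could prompt it with the opening moves of a game:
```python
from transformers import pipeline

# Sketch only: let the model continue a partial game; settings are illustrative.
generator = pipeline('text-generation', model='raruidol/GameANchess')
print(generator("1. e4 e5 2. Nf3 Nc6", max_length=60, num_return_sequences=1))
```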
|
danny911kr/calm-mix-large
|
danny911kr
| 2021-09-16T07:23:19Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
## CALM
This model accompanies the ICLR 2021 paper [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [project website](https://inklab.usc.edu/calm-project) for details!
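The card does not include a usage snippet; since the checkpoint is tagged as a T5-style text2text model, a minimal loading sketch might look like the following (the prompt is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: load the checkpoint as a seq2seq model and generate a continuation.
tokenizer = AutoTokenizer.from_pretrained("danny911kr/calm-mix-large")
model = AutoModelForSeq2SeqLM.from_pretrained("danny911kr/calm-mix-large")

inputs = tokenizer("A bird can fly because", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```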
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
```
|
danny911kr/calm-large
|
danny911kr
| 2021-09-16T07:16:53Z
| 10
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
## CALM
This model accompanies the ICLR 2021 paper [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
```
|
chrommium/rubert-base-cased-sentence-finetuned-headlines_X
|
chrommium
| 2021-09-16T00:34:06Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rubert-base-cased-sentence-finetuned-headlines_X
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-headlines_X
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2535
- Accuracy: 0.952
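The card does not show how to call the model; a hedged usage sketch with the text-classification pipeline follows (the Russian headline is purely illustrative, and the label names depend on the undocumented fine-tuning data):
```python
from transformers import pipeline

# Sketch only: label names come from the (undocumented) fine-tuning setup.
classifier = pipeline(
    "text-classification",
    model="chrommium/rubert-base-cased-sentence-finetuned-headlines_X",
)
print(classifier("Курс доллара вырос"))  # "The dollar exchange rate rose"
```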
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 157 | 0.2759 | 0.912 |
| No log | 2.0 | 314 | 0.2538 | 0.936 |
| No log | 3.0 | 471 | 0.2556 | 0.945 |
| 0.1908 | 4.0 | 628 | 0.2601 | 0.95 |
| 0.1908 | 5.0 | 785 | 0.2535 | 0.952 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
huggingartists/ot-rus
|
huggingartists
| 2021-09-15T17:29:45Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/ot-rus",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/ot-rus
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5b2286f88533601eda462ce44dd2ee56.776x776x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">O.T (RUS)</div>
<a href="https://genius.com/artists/ot-rus">
<div style="text-align: center; font-size: 14px;">@ot-rus</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from O.T (RUS).
The dataset is available [here](https://huggingface.co/datasets/huggingartists/ot-rus) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/ot-rus")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/35byet4r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on O.T (RUS)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2p2tawej) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2p2tawej/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/ot-rus')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/ot-rus")
model = AutoModelWithLMHead.from_pretrained("huggingartists/ot-rus")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/gunna
|
huggingartists
| 2021-09-15T17:15:43Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/gunna",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/gunna
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/18e3833ac527a4bf14ddf2acef834795.640x640x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gunna</div>
<a href="https://genius.com/artists/gunna">
<div style="text-align: center; font-size: 14px;">@gunna</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Gunna.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/gunna) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/gunna")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/vcyblers/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Gunna's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3c1xymw6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3c1xymw6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/gunna')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/gunna")
model = AutoModelWithLMHead.from_pretrained("huggingartists/gunna")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/big-russian-boss
|
huggingartists
| 2021-09-15T16:41:55Z
| 15
| 1
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/big-russian-boss",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/big-russian-boss
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d66eeeef006738708df1e52b84c34c14.403x403x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Big Russian Boss</div>
<a href="https://genius.com/artists/big-russian-boss">
<div style="text-align: center; font-size: 14px;">@big-russian-boss</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Big Russian Boss.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/big-russian-boss) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/big-russian-boss")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ju9bqqi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Big Russian Boss's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3820n7qx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3820n7qx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/big-russian-boss')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/big-russian-boss")
model = AutoModelWithLMHead.from_pretrained("huggingartists/big-russian-boss")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
ziedsb19/tunbert_zied
|
ziedsb19
| 2021-09-15T14:04:41Z
| 8
| 2
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
## 🧐 About <a name = "about"></a>
tunbert_zied is a language model for the Tunisian dialect, based on an architecture similar to RoBERTa and created by Zied Sbabti.
The model was trained on over 600,000 phrases written in the Tunisian dialect.
## 🏁 Getting Started <a name = "getting_started"></a>
Load <strong>tunbert_zied</strong> and its sub-word tokenizer.
Don't use the <em>AutoTokenizer.from_pretrained(...)</em> method to load the tokenizer; use the <em>BertTokenizer.from_pretrained(...)</em> method instead. (The model does not use RoBERTa's built-in byte-level GPT-style tokenizer; a BertTokenizer was used during training.)
### Example
```python
import transformers as tr
tokenizer = tr.BertTokenizer.from_pretrained("ziedsb19/tunbert_zied")
model = tr.AutoModelForMaskedLM.from_pretrained("ziedsb19/tunbert_zied")
pipeline = tr.pipeline("fill-mask", model=model, tokenizer=tokenizer)
#test the model by masking a word in a phrase with [MASK]
pipeline("Ahla winek [MASK] lioum ?")
#results
"""
[{'sequence': 'ahla winek cv lioum?',
'score': 0.07968682795763016,
'token': 869,
'token_str': 'c v'},
{'sequence': 'ahla winek enty lioum?',
'score': 0.06116843968629837,
'token': 448,
'token_str': 'e n t y'},
{'sequence': 'ahla winek ch3amla lioum?',
'score': 0.057379286736249924,
'token': 7342,
'token_str': 'c h 3 a m l a'},
{'sequence': 'ahla winek cha3malt lioum?',
'score': 0.028112901374697685,
'token': 4663,
'token_str': 'c h a 3 m a l t'},
{'sequence': 'ahla winek enti lioum?',
'score': 0.025781650096178055,
'token': 436,
'token_str': 'e n t i'}]
"""
```
## ✍️ Authors <a name = "authors"></a>
- [zied sbabti](https://www.linkedin.com/in/zied-sbabti-a58a56139) - Idea & Initial work
|
huggingartists/lil-baby
|
huggingartists
| 2021-09-15T13:17:41Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/lil-baby",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/lil-baby
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/98367f3cd4548347b114452eb3a5927f.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lil Baby</div>
<a href="https://genius.com/artists/lil-baby">
<div style="text-align: center; font-size: 14px;">@lil-baby</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Lil Baby.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/lil-baby) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/lil-baby")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/vueaothh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lil Baby's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/257bod1h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/257bod1h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/lil-baby')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/lil-baby")
model = AutoModelWithLMHead.from_pretrained("huggingartists/lil-baby")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/till-lindemann
|
huggingartists
| 2021-09-15T11:46:53Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/till-lindemann",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/till-lindemann
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/48d6ca7ca17a9dfc9ad3034e71533a89.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Till Lindemann</div>
<a href="https://genius.com/artists/till-lindemann">
<div style="text-align: center; font-size: 14px;">@till-lindemann</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Till Lindemann.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/till-lindemann) and can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/till-lindemann")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2xh6fyqt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Till Lindemann's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/32ohf092) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/32ohf092/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/till-lindemann')
generator("I am", num_return_sequences=5)
```
Or with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/till-lindemann")
model = AutoModelWithLMHead.from_pretrained("huggingartists/till-lindemann")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
|
blizrys
| 2021-09-15T04:08:24Z
| 11
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.72
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.72
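No usage snippet is provided; below is a hedged sketch of scoring a PubMedQA-style question/context pair with the classification head. The input format and label meanings are assumptions, since the training data is not documented in this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch only: the exact input format used during fine-tuning is not documented.
name = "blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "Does aspirin reduce the risk of stroke?",
    "Aspirin has been studied for secondary prevention of stroke ...",
    return_tensors="pt",
)
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class probabilities; label mapping is not documented
```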
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8396 | 0.58 |
| No log | 2.0 | 114 | 0.8608 | 0.58 |
| No log | 3.0 | 171 | 0.7642 | 0.68 |
| No log | 4.0 | 228 | 0.8196 | 0.64 |
| No log | 5.0 | 285 | 0.6477 | 0.72 |
| No log | 6.0 | 342 | 0.6861 | 0.72 |
| No log | 7.0 | 399 | 0.6735 | 0.74 |
| No log | 8.0 | 456 | 0.6516 | 0.72 |
| 0.6526 | 9.0 | 513 | 0.6707 | 0.72 |
| 0.6526 | 10.0 | 570 | 0.6748 | 0.72 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
huggingtweets/lilnasx
|
huggingtweets
| 2021-09-14T23:54:26Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/lilnasx/1631663662799/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1430901239110258696/1P0QZ5_7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MONTERO 🦋</div>
<div style="text-align: center; font-size: 14px;">@lilnasx</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MONTERO 🦋.
| Data | MONTERO 🦋 |
| --- | --- |
| Tweets downloaded | 3169 |
| Retweets | 883 |
| Short tweets | 796 |
| Tweets kept | 1490 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/z4oke017/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lilnasx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3flqsl4t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3flqsl4t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lilnasx')
generator("My dream is", num_return_sequences=5)
```
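Or, if you prefer to load the tokenizer and model explicitly (a minimal sketch equivalent to the pipeline above):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/lilnasx")
model = AutoModelWithLMHead.from_pretrained("huggingtweets/lilnasx")
```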
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/foodnetwork
|
huggingtweets
| 2021-09-14T23:41:32Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/foodnetwork/1631662887881/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1395089186538115072/oehHqb54_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Food Network</div>
<div style="text-align: center; font-size: 14px;">@foodnetwork</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Food Network.
| Data | Food Network |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 938 |
| Short tweets | 49 |
| Tweets kept | 2250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2x1lok4q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @foodnetwork's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yjxdjcm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yjxdjcm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/foodnetwork')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/theofficetv
|
huggingtweets
| 2021-09-14T23:33:05Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/theofficetv/1631662381899/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1397240738493001729/Unk8D_yT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Office on Peacock</div>
<div style="text-align: center; font-size: 14px;">@theofficetv</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Office on Peacock.
| Data | The Office on Peacock |
| --- | --- |
| Tweets downloaded | 3215 |
| Retweets | 459 |
| Short tweets | 592 |
| Tweets kept | 2164 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dwxnzp9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theofficetv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1mnr0e28) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1mnr0e28/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theofficetv')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
flexudy/t5-base-conceptor
|
flexudy
| 2021-09-14T23:27:35Z
| 6
| 4
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
# Towards Neuro-Symbolic Language Understanding

At [Flexudy](https://flexudy.com), we look for ways to unify symbolic and sub-symbolic methods to improve model interpretation and inference.
## Problem
1. Word embeddings are awesome 🚀. However, no one really knows what an array of 768 numbers actually means.
2. Text/Token classification is also awesome ❤️. Still, classifying things into a finite set of concepts is rather limited.
3. Last but not least, how do I know that the word *cat* is a **mammal** and also an **animal** if my neural network is only trained to predict whether something is an animal or not?
## Solution
1. It would be cool if my neural network would just know that **cat** is an **animal** right? *∀x.Cat(x) ⇒ Animal(x)*.
Or for example, (*∀x.SchöneBlumen(x) ⇒ Blumen(x)*) -- English meaning: For all x, If x is a beautiful flower, then x is still a flower. --
2. All of a sudden, tasks like **Question Answering**, **Summarization**, **Named Entity Recognition** or even **Intent Classification** etc. become easier, right?
Well, one might probably still need time to build a good and robust solution that is not as large as **GPT3**.
Like [Peter Gärdenfors, author of conceptual spaces](https://www.goodreads.com/book/show/1877443.Conceptual_Spaces), we are trying to find ways to navigate between the symbolic and the sub-symbolic by thinking in concepts.
Should such a solution exist, one could easily leverage true logical reasoning engines on natural language.
How awesome would that be? 💡
## Flexudy's Conceptor
1. We developed a poor man's implementation of the ideal solution described above.
2. Though it is a poor man's model, **it is still a useful one** 🤗.
### Usage
No library should make anyone suffer. Especially not if it is built on top of 🤗 **HF Transformers**.
Go to the [Github repo](https://github.com/flexudy/natural-language-logic)
`pip install git+https://github.com/flexudy/[email protected]`
```python
from flexudy.conceptor.start import FlexudyConceptInferenceMachineFactory
# Load me only once
concept_inference_machine = FlexudyConceptInferenceMachineFactory.get_concept_inference_machine()
# A list of terms.
terms = ["cat", "dog", "economics and sociology", "public company"]
# If you don't pass the language, a language detector will attempt to predict it for you
# If any error occurs, the language defaults to English.
language = "en"
# Predict concepts
# You can also pass the batch_size=2 and the beam_size=4
concepts = concept_inference_machine.infer_concepts(terms, language=language)
```
Output:
```python
{'cat': ['mammal', 'animal'], 'dog': ['hound', 'animal'], 'economics and sociology': ['both fields of study'], 'public company': ['company']}
```
### How was it trained?
1. Using Google's T5-base and T5-small. Both models are released on the Hugging Face Hub.
2. T5-base was trained for only two epochs while T5-small was trained for 5 epochs.
## Where did you get the data?
1. I extracted and curated a fragment of [Conceptnet](https://conceptnet.io/)
2. In particular, only the IsA relation was used.
3. Note that one term can belong to multiple concepts (which is pretty cool if you think about [Fuzzy Description Logics](https://lat.inf.tu-dresden.de/~stefborg/Talks/QuantLAWorkshop2013.pdf)).
Multiple inheritance, however, means some terms belong to a great many concepts. Hence, I decided to randomly throw some away due to the **maximum length limitation**.
### Setup
1. I finally allowed only `2` to `4` concepts at random for each term. This means there is still great potential to make the models generalise better 🚀.
2. I used a total of `279884` training examples and `1260` for testing. Edges -- i.e. `IsA(concept u, concept v)` -- in both sets are disjoint.
3. Trained for `15K` steps with linear learning-rate decay at each step, starting at `0.001`.
4. Used the `RAdam Optimiser` with weight_decay = `0.01` and batch_size = `36`.
5. Source and target max length were both `64`.
### Multilingual Models
1. The "conceptor" model is multilingual. English, German and French is supported.
2. [Conceptnet](https://conceptnet.io/) supports many languages, but I just chose those three because those are the ones I speak.
### Metrics for flexudy-conceptor-t5-base
| Metric | Score |
| ------------- |:-------------:|
| Exact Match | 36.67 |
| F1 | 43.08 |
| Loss smooth | 1.214 |
Unfortunately, we no longer have the metrics for flexudy-conceptor-t5-small. If I recall correctly, base was just slightly better on the test set (ca. `2%` F1).
## Why not just use the data if you have it structured already?
Conceptnet is very large. Even if you just consider loading a fragment into your RAM, say with only 100K edges, this is still a large graph.
Especially, if you think about how you will save the node embeddings efficiently for querying.
If you prefer this approach, [Milvus](https://github.com/milvus-io/pymilvus) can be of great help.
You can compute query embeddings and try to find the best match. From there (after matching), you can navigate through the graph at `100%` precision.
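As a rough sketch of that retrieval idea (plain NumPy standing in for a vector database such as Milvus; `embed` is a hypothetical placeholder for any sentence encoder, and the toy `IsA` edges are illustrative, not a real ConceptNet extract):
```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in encoder: replace with a real sentence embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

isa_edges = {"cat": ["mammal", "animal"], "dog": ["hound", "animal"]}  # toy graph
node_embeddings = {term: embed(term) for term in isa_edges}

query_vec = embed("kitten")
best_match = max(node_embeddings, key=lambda t: float(node_embeddings[t] @ query_vec))

# After matching, navigating the stored edges themselves is exact.
print(best_match, "IsA", isa_edges[best_match])
```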
|
bigscience/tr1-13B-codecarbon
|
bigscience
| 2021-09-14T21:38:31Z
| 0
| 1
| null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z
|
CodeCarbon wasn't ready until the training was over, so we only did an additional 10h run to measure with, and from that we can extrapolate to the whole training.
This set of records captures the startup time and 2499 iterations, split into 2 records per GPU, since an intermediary checkpoint was saved half-way through and we flush the CodeCarbon records on each checkpoint save.
The full training had 168000 iterations, so the reported data should be multiplied by roughly 67 (168000 / 2499 ≈ 67). This is quite approximate, since we were using 16 nodes during the ramp-up, then 64, and only for the last 3 weeks 128 nodes.
Caveat emptor: I'm not sure whether the CodeCarbon reports overlap, since each report is per GPU and I think they may all be measuring the same resources other than the GPU itself. So this requires research.
Each csv file contains a report for a single GPU.
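As a rough illustration of the extrapolation described above (not part of the original measurement setup; the `emissions` and `energy_consumed` column names follow CodeCarbon's standard CSV output and should be checked against the actual files):
```python
import glob

import pandas as pd

# Scale factor: iterations in the full training vs. iterations in the measured sample.
SCALE = 168_000 / 2_499  # ~67

# One CSV per GPU; concatenate them all.
sample = pd.concat([pd.read_csv(path) for path in glob.glob("*.csv")], ignore_index=True)

# Very approximate totals for the whole training (see the caveats above).
est_emissions_kg = sample["emissions"].sum() * SCALE      # kg CO2eq
est_energy_kwh = sample["energy_consumed"].sum() * SCALE  # kWh
print(f"~{est_emissions_kg:,.0f} kg CO2eq, ~{est_energy_kwh:,.0f} kWh (very approximate)")
```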
|
huggingtweets/imrobertyi
|
huggingtweets
| 2021-09-14T20:51:55Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/imrobertyi/1631652694998/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1319291252718759938/q2NdOiAb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Yi 🐳</div>
<div style="text-align: center; font-size: 14px;">@imrobertyi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Yi 🐳.
| Data | Robert Yi 🐳 |
| --- | --- |
| Tweets downloaded | 1353 |
| Retweets | 61 |
| Short tweets | 130 |
| Tweets kept | 1162 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cmckdcz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imrobertyi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fi24mvdb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fi24mvdb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imrobertyi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
macedonizer/gr-gpt2
|
macedonizer
| 2021-09-14T16:07:35Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gr",
"dataset:wiki-gr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language:
- gr
thumbnail: https://huggingface.co/macedonizer/gr-roberta-base/lets-talk-about-nlp-gr.jpg
license: apache-2.0
datasets:
- wiki-gr
---
# gr-gpt2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the Greek language using a causal language modeling (CLM) objective. The GPT-2 approach it builds on was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
gr-gpt2 is a transformers model pretrained on a very large corpus of Greek data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the Greek language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a
prompt.
### How to use
Here is how to use this model to generate text in PyTorch:
```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/gr-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/gr-gpt2')

input_text = 'Η Αθήνα είναι'

if len(input_text) == 0:
    # No prompt given: sample unconditionally from a random BOS token.
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # Generate a continuation of the prompt.
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
|
CAMeL-Lab
| 2021-09-14T14:34:06Z
| 7
| 2
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-half** (`bert-base-arabic-camelbert-msa-half`), a model pre-trained on a half of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
|✔|`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.09132730215787888,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
'score': 0.08282623440027237,
'token': 18,
'token_str': '.'},
{'sequence': '[CLS] الهدف من الحياة هو البقاء. [SEP]',
'score': 0.04031957685947418,
'token': 9331,
'token_str': 'البقاء'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.032019514590501785,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.028731243684887886,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-half')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) (a minimal sketch follows this list).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
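For the dediacritization step mentioned above, a minimal sketch with CAMeL Tools (an illustration only, not the exact preprocessing script used for pre-training):
```python
from camel_tools.utils.dediac import dediac_ar

text = "الهدفُ من الحياةِ هو العملُ."  # example sentence with diacritics
print(dediac_ar(text))                 # same sentence with diacritics removed
```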
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (see the sketch after this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
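A minimal PyTorch sketch of that fine-tuning setup (an illustration only, not the authors' code; the number of classes and the use of the [CLS] position are assumptions):
```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

model_id = "CAMeL-Lab/bert-base-arabic-camelbert-msa-half"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)
classifier = nn.Linear(encoder.config.hidden_size, 3)  # e.g. 3 sentiment classes (illustrative)

inputs = tokenizer("مرحبا يا عالم.", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (batch, seq_len, hidden)
logits = classifier(hidden[:, 0, :])              # linear layer on the [CLS] position
print(logits.shape)                               # torch.Size([1, 3])
```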
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa
|
CAMeL-Lab
| 2021-09-14T14:33:41Z
| 3,351
| 9
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA** (`bert-base-arabic-camelbert-msa`), a model pre-trained on the entire MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
|✔|`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.08507660031318665,
'token': 2854,
'token_str': 'العمل'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.058905381709337234,
'token': 3696, 'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.04660581797361374, 'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]',
'score': 0.04156001657247543,
'token': 12413, 'token_str': 'الربح'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.03534102067351341,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
|
CAMeL-Lab
| 2021-09-14T14:29:52Z
| 24
| 2
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA-eighth** (`bert-base-arabic-camelbert-msa-eighth`), a model pre-trained on an eighth of the full MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
|✔|`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.057812128216028214,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.05573025345802307,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الكمال. [SEP]',
'score': 0.035942986607551575,
'token': 17188,
'token_str': 'الكمال'},
{'sequence': '[CLS] الهدف من الحياة هو التعلم. [SEP]',
'score': 0.03375256434082985,
'token': 12554,
'token_str': 'التعلم'},
{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.030303971841931343,
'token': 2854,
'token_str': 'العمل'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da
|
CAMeL-Lab
| 2021-09-14T14:29:21Z
| 1,130
| 28
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-DA** (`bert-base-arabic-camelbert-da`), a model pre-trained on the DA (dialectal Arabic) dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
|✔|`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-da')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
'score': 0.062508225440979,
'token': 18,
'token_str': '.'},
{'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]',
'score': 0.033172328025102615,
'token': 4295,
'token_str': 'الموت'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.029575437307357788,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو الرحيل. [SEP]',
'score': 0.02724040113389492,
'token': 11449,
'token_str': 'الرحيل'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.01564178802073002,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you need `transformers>=3.5.0`. Otherwise, you can download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- DA (dialectal Arabic)
- A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678).
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) (a rough sketch of this step follows the list).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
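As a rough illustration of the diacritic and kashida removal step above, the snippet below combines CAMeL Tools' `dediac_ar` utility (assumed to be importable from `camel_tools.utils.dediac`) with a plain removal of the tatweel character; the actual preprocessing scripts used for CAMeLBERT may differ.
```python
# Minimal sketch of the diacritic/kashida removal step; not the exact
# preprocessing code used for CAMeLBERT.
from camel_tools.utils.dediac import dediac_ar  # assumed CAMeL Tools utility

def clean_line(line: str) -> str:
    line = dediac_ar(line)             # remove Arabic diacritics
    return line.replace("\u0640", "")  # remove kashida (tatweel, U+0640)

print(clean_line("الـهـدفُ مِنَ الحياةِ"))
```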
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (a sketch follows this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
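The setup described above (a linear classification layer on top of the encoder, scored with F1) can be approximated with the standard Transformers sequence-classification head. This is only a sketch with toy inputs and an assumed label count; the exact fine-tuning code is linked above.
```python
import torch
from sklearn.metrics import f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-da"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Adds a linear classification layer on top of the encoder output, similar to
# the setup described above. num_labels=3 is illustrative only.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

texts = ["جملة للتجربة", "جملة أخرى"]   # toy examples
labels = torch.tensor([0, 1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
preds = outputs.logits.argmax(dim=-1)
print(outputs.loss.item(), f1_score(labels.numpy(), preds.numpy(), average="macro"))
```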
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
lewtun/perceriver-test-01
|
lewtun
| 2021-09-14T14:07:26Z
| 2
| 0
|
transformers
|
[
"transformers",
"pytorch",
"satflow",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
---
license: mit
tags:
- satflow
- forecasting
- timeseries
- remote-sensing
---
# Perceiver
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
lewtun/litmetnet-test-01
|
lewtun
| 2021-09-14T10:04:03Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"satflow",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
---
license: mit
tags:
- satflow
- forecasting
- timeseries
- remote-sensing
---
# LitMetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
mdhugol/indonesia-bert-sentiment-classification
|
mdhugol
| 2021-09-14T08:24:28Z
| 11,943
| 21
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
Indonesian BERT Base Sentiment Classifier is a sentiment text-classification model. It was initialized from the pre-trained [IndoBERT Base Model (phase1 - uncased)](https://huggingface.co/indobenchmark/indobert-base-p1) and fine-tuned on the [Prosa sentiment dataset](https://github.com/indobenchmark/indonlu/tree/master/dataset/smsa_doc-sentiment-prosa).
## How to Use
### As Text Classifier
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
pretrained = "mdhugol/indonesia-bert-sentiment-classification"
model = AutoModelForSequenceClassification.from_pretrained(pretrained)
tokenizer = AutoTokenizer.from_pretrained(pretrained)
sentiment_analysis = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
label_index = {'LABEL_0': 'positive', 'LABEL_1': 'neutral', 'LABEL_2': 'negative'}
pos_text = "Sangat bahagia hari ini"
neg_text = "Dasar anak sialan!! Kurang ajar!!"
result = sentiment_analysis(pos_text)
status = label_index[result[0]['label']]
score = result[0]['score']
print(f'Text: {pos_text} | Label : {status} ({score * 100:.3f}%)')
result = sentiment_analysis(neg_text)
status = label_index[result[0]['label']]
score = result[0]['score']
print(f'Text: {neg_text} | Label : {status} ({score * 100:.3f}%)')
```
|
huggingtweets/shonenpatties
|
huggingtweets
| 2021-09-14T06:05:23Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/shonenpatties/1631599507048/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1437598912987242499/ieZu5j9D_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cadrega</div>
<div style="text-align: center; font-size: 14px;">@shonenpatties</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cadrega.
| Data | Cadrega |
| --- | --- |
| Tweets downloaded | 3007 |
| Retweets | 1083 |
| Short tweets | 597 |
| Tweets kept | 1327 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26h0ua9i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shonenpatties's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10wvfto6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10wvfto6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/shonenpatties')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
monilouise/ner_news_portuguese
|
monilouise
| 2021-09-13T17:12:03Z
| 49
| 10
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"ner",
"pt",
"arxiv:1909.10649",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
language:
- pt
tags:
- ner
metrics:
- f1
- accuracy
- precision
- recall
---
# RiskData Brazilian Portuguese NER
## Model description
This is a fine-tuned version of [Neuralmind BERTimbau](https://github.com/neuralmind-ai/portuguese-bert/blob/master/README.md) for the Portuguese language.
## Intended uses & limitations
#### How to use
```python
from transformers import BertForTokenClassification, DistilBertTokenizerFast, pipeline
model = BertForTokenClassification.from_pretrained('monilouise/ner_pt_br')
tokenizer = DistilBertTokenizerFast.from_pretrained('neuralmind/bert-base-portuguese-cased'
, model_max_length=512
, do_lower_case=False
)
nlp = pipeline('ner', model=model, tokenizer=tokenizer, grouped_entities=True)
result = nlp("O Tribunal de Contas da União é localizado em Brasília e foi fundado por Rui Barbosa.")
```
#### Limitations and bias
- The fine-tuned model was trained on a corpus of around 180 news articles crawled from Google News. The original project's purpose was to recognize named entities in news related to fraud and corruption, classifying these entities into four classes: PERSON, ORGANIZATION, PUBLIC INSTITUTION and LOCAL (PESSOA, ORGANIZAÇÃO, INSTITUIÇÃO PÚBLICA and LOCAL).
## Training procedure
## Eval results
- accuracy: 0.98
- precision: 0.86
- recall: 0.91
- f1: 0.88
The score was calculated using this code:
```python
import numpy as np
from typing import Dict, List, Tuple

from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score
from torch import nn
from transformers import EvalPrediction

# `id2tag` is assumed to be the id-to-tag mapping defined during training.

def align_predictions(predictions: np.ndarray, label_ids: np.ndarray) -> Tuple[List[int], List[int]]:
    preds = np.argmax(predictions, axis=2)
    batch_size, seq_len = preds.shape
    out_label_list = [[] for _ in range(batch_size)]
    preds_list = [[] for _ in range(batch_size)]
    for i in range(batch_size):
        for j in range(seq_len):
            # Skip positions ignored by the loss (padding and special tokens).
            if label_ids[i, j] != nn.CrossEntropyLoss().ignore_index:
                out_label_list[i].append(id2tag[label_ids[i][j]])
                preds_list[i].append(id2tag[preds[i][j]])
    return preds_list, out_label_list

def compute_metrics(p: EvalPrediction) -> Dict:
    preds_list, out_label_list = align_predictions(p.predictions, p.label_ids)
    return {
        "accuracy_score": accuracy_score(out_label_list, preds_list),
        "precision": precision_score(out_label_list, preds_list),
        "recall": recall_score(out_label_list, preds_list),
        "f1": f1_score(out_label_list, preds_list),
    }
```
### BibTeX entry and citation info
For further information about BERTimbau language model:
```bibtex
@inproceedings{souza2020bertimbau,
author = {Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@article{souza2019portuguese,
title={Portuguese Named Entity Recognition using BERT-CRF},
author={Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},
journal={arXiv preprint arXiv:1909.10649},
url={http://arxiv.org/abs/1909.10649},
year={2019}
}
```
|
Narrativa/distilroberta-finetuned-stereotype-detection
|
Narrativa
| 2021-09-13T14:52:21Z
| 7
| 4
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"stereotype",
"gender",
"gender_bias",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
- stereotype
- gender
- gender_bias
widget:
- text: "Cauterize is not just for fans of the guitarist or his other projects, but those that love music that is both aggressive and infectious and gave the album 4 out of 5 stars ."
metrics:
- accuracy
model-index:
- name: distilRoberta-stereotype
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.989151002901476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilRoberta-stereotype
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0651
- Accuracy: 0.9892
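The card does not include usage code, but since this is a standard text-classification checkpoint it can presumably be called through the pipeline API; the snippet below is a sketch, and the label names returned are whatever the checkpoint's config defines.
```python
from transformers import pipeline

# Usage sketch; the returned label names come from the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="Narrativa/distilroberta-finetuned-stereotype-detection",
)
print(classifier("Cauterize is not just for fans of the guitarist or his other projects."))
```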
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0783 | 1.0 | 5615 | 0.0703 | 0.9847 |
| 0.0468 | 2.0 | 11230 | 0.0573 | 0.9863 |
| 0.0316 | 3.0 | 16845 | 0.0580 | 0.9882 |
| 0.0172 | 4.0 | 22460 | 0.0591 | 0.9885 |
| 0.0098 | 5.0 | 28075 | 0.0651 | 0.9892 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
|
blizrys/biobert-v1.1-finetuned-pubmedqa-adapter
|
blizrys
| 2021-09-13T13:49:08Z
| 0
| 0
| null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model_index:
- name: biobert-v1.1-finetuned-pubmedqa-adapter
results:
- task:
name: Text Classification
type: text-classification
metric:
name: Accuracy
type: accuracy
value: 0.48
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-finetuned-pubmedqa-adapter
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0910
- Accuracy: 0.48
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.9848 | 0.58 |
| No log | 2.0 | 114 | 0.8537 | 0.58 |
| No log | 3.0 | 171 | 0.9565 | 0.42 |
| No log | 4.0 | 228 | 0.9659 | 0.56 |
| No log | 5.0 | 285 | 0.9763 | 0.6 |
| No log | 6.0 | 342 | 1.0647 | 0.66 |
| No log | 7.0 | 399 | 1.4305 | 0.6 |
| No log | 8.0 | 456 | 2.0545 | 0.56 |
| 0.6957 | 9.0 | 513 | 2.2438 | 0.5 |
| 0.6957 | 10.0 | 570 | 2.0910 | 0.48 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
SIKU-BERT/sikubert
|
SIKU-BERT
| 2021-09-13T13:34:40Z
| 419
| 10
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"roberta",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
---
language:
- "zh"
thumbnail: "https://raw.githubusercontent.com/SIKU-BERT/SikuBERT/main/appendix/sikubert.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "roberta"
- "pytorch"
inference: false
license: "apache-2.0"
---
# SikuBERT
## Model description

Digital humanities research needs the support of large-scale corpora and high-performance natural language processing tools for ancient Chinese. Pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts, but there is still an urgent need for a pre-trained model dedicated to the automatic processing of ancient texts. Using the verified, high-quality full-text corpus of the “Siku Quanshu” as the training set and the BERT deep language model architecture as the base, we constructed the SikuBERT and SikuRoBERTa pre-trained language models for intelligent processing tasks on ancient Chinese.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("SIKU-BERT/sikubert")
model = AutoModel.from_pretrained("SIKU-BERT/sikubert")
```
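Since the repository is tagged `fill-mask`, the checkpoint should also work with the masked-language-modeling pipeline; the Classical Chinese example below is only illustrative.
```python
from transformers import pipeline

# Fill-mask sketch with an illustrative Classical Chinese sentence.
fill_mask = pipeline("fill-mask", model="SIKU-BERT/sikubert")
print(fill_mask("學而時習之不亦[MASK]乎"))
```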
## About Us
We are from Nanjing Agricultural University.
> Created by SIKU-BERT [](https://github.com/SIKU-BERT/SikuBERT-for-digital-humanities-and-classical-Chinese-information-processing)
|
Gregor/bert-base-multilingual-cased-wmt21-qe
|
Gregor
| 2021-09-13T11:20:52Z
| 3
| 0
|
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:quality_estimation/wmt21",
"bert",
"region:us"
] | null | 2022-03-02T23:29:04Z
|
---
tags:
- adapter-transformers
- adapterhub:quality_estimation/wmt21
- bert
---
# Adapter `Gregor/bert-base-multilingual-cased-wmt21-qe` for bert-base-multilingual-cased
An [adapter](https://adapterhub.ml) for the bert-base-multilingual-cased model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```bash
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-multilingual-cased")
adapter_name = model.load_adapter("Gregor/bert-base-multilingual-cased-wmt21-qe")
model.active_adapters = adapter_name
```
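Once the adapter is active, a source sentence and its machine translation can be scored with an ordinary forward pass. The snippet below is a sketch that repeats the loading step above; the card does not document what the prediction head's output classes mean.
```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

# Sketch: score a source / machine-translation pair with the loaded adapter.
# The meaning of the head's output classes is not documented in this card.
model = AutoModelWithHeads.from_pretrained("bert-base-multilingual-cased")
model.active_adapters = model.load_adapter("Gregor/bert-base-multilingual-cased-wmt21-qe")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

inputs = tokenizer("This is the source sentence.",
                   "Dies ist der übersetzte Satz.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs)
```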
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
huggingtweets/lux_capital
|
huggingtweets
| 2021-09-13T09:48:37Z
| 3
| 1
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/lux_capital/1631526513457/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/728194457632395264/rwtxA-v4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lux Capital</div>
<div style="text-align: center; font-size: 14px;">@lux_capital</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lux Capital.
| Data | Lux Capital |
| --- | --- |
| Tweets downloaded | 2329 |
| Retweets | 597 |
| Short tweets | 22 |
| Tweets kept | 1710 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3khqan1v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lux_capital's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gfkbn7u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gfkbn7u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lux_capital')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
eugenesiow/drln-bam
|
eugenesiow
| 2021-09-13T08:34:40Z
| 69,548
| 1
|
transformers
|
[
"transformers",
"DRLN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1906.12021",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- super-image
- image-super-resolution
datasets:
- eugenesiow/Div2k
- eugenesiow/Set5
- eugenesiow/Set14
- eugenesiow/BSD100
- eugenesiow/Urban100
metrics:
- psnr
- ssim
---
# Densely Residual Laplacian Super-Resolution (DRLN)
DRLN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Densely Residual Laplacian Super-resolution](https://arxiv.org/abs/1906.12021) by Anwar et al. (2020) and first released in [this repository](https://github.com/saeed-anwar/DRLN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import DrlnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = DrlnModel.from_pretrained('eugenesiow/drln-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The 2x, 3x and 4x image super-resolution models were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) training images augmented to 4,000 images, with a dev set of 100 validation images (numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, DrlnModel, DrlnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = DrlnConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = DrlnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The result columns below are reported as `PSNR/SSIM`; they are compared against a bicubic baseline.
|Dataset |Scale |Bicubic |drln-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.23/0.9614** |
|Set5 |3x |30.39/0.8678 |**35.3/0.9422** |
|Set5 |4x |28.42/0.8101 |**32.49/0.8986** |
|Set14 |2x |30.22/0.8683 |**33.95/0.9206** |
|Set14 |3x |27.53/0.7737 |**31.27/0.8624** |
|Set14 |4x |25.99/0.7023 |**28.94/0.7899** |
|BSD100 |2x |29.55/0.8425 |**33.95/0.9269** |
|BSD100 |3x |27.20/0.7382 |**29.78/0.8224** |
|BSD100 |4x |25.96/0.6672 |**28.63/0.7686** |
|Urban100 |2x |26.66/0.8408 |**32.81/0.9339** |
|Urban100 |3x | |**29.82/0.8828** |
|Urban100 |4x |23.14/0.6573 |**26.53/0.7991** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@misc{anwar2019densely,
title={Densely Residual Laplacian Super-Resolution},
author={Saeed Anwar and Nick Barnes},
year={2019},
eprint={1906.12021},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
|
anshengli2/DialogGPT-small-Bot
|
anshengli2
| 2021-09-13T05:39:55Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
tags:
- conversational
---
|
huggingartists/peter-paul-and-mary
|
huggingartists
| 2021-09-13T00:35:13Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/peter-paul-and-mary",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/peter-paul-and-mary
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/02fe78bca7c47dc6869673e7552c7978.500x338x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Peter, Paul and Mary</div>
<a href="https://genius.com/artists/peter-paul-and-mary">
<div style="text-align: center; font-size: 14px;">@peter-paul-and-mary</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Peter, Paul and Mary.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/peter-paul-and-mary).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/peter-paul-and-mary")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/svwa6bev/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Peter, Paul and Mary's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1s4mkr9x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1s4mkr9x/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/peter-paul-and-mary')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/peter-paul-and-mary")
model = AutoModelWithLMHead.from_pretrained("huggingartists/peter-paul-and-mary")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
blizrys/biobert-base-cased-v1.1-finetuned-pubmedqa
|
blizrys
| 2021-09-12T15:54:21Z
| 24
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: biobert-base-cased-v1.1-finetuned-pubmedqa
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-finetuned-pubmedqa
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3182
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
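A rough reconstruction of these settings as `TrainingArguments` (the original training script is not part of this card, so this is only illustrative):
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="biobert-base-cased-v1.1-finetuned-pubmedqa",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```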
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8591 | 0.58 |
| No log | 2.0 | 114 | 0.9120 | 0.58 |
| No log | 3.0 | 171 | 0.8159 | 0.62 |
| No log | 4.0 | 228 | 1.1651 | 0.54 |
| No log | 5.0 | 285 | 1.2350 | 0.6 |
| No log | 6.0 | 342 | 1.5563 | 0.68 |
| No log | 7.0 | 399 | 2.0233 | 0.58 |
| No log | 8.0 | 456 | 2.2054 | 0.5 |
| 0.4463 | 9.0 | 513 | 2.2434 | 0.5 |
| 0.4463 | 10.0 | 570 | 2.3182 | 0.5 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
huggingartists/aikko
|
huggingartists
| 2021-09-12T14:10:49Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/aikko",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/aikko
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a1a40316d1405fa83df2a21923d64168.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">aikko</div>
<a href="https://genius.com/artists/aikko">
<div style="text-align: center; font-size: 14px;">@aikko</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from aikko.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/aikko).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/aikko")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1cfdpsrg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on aikko's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/oesyn53g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/oesyn53g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/aikko')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/aikko")
model = AutoModelWithLMHead.from_pretrained("huggingartists/aikko")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/krechet
|
huggingartists
| 2021-09-12T14:06:03Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/krechet",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/krechet
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/61181ccb60b6a0e1e7f8fb8ae2a2ab0a.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Krechet</div>
<a href="https://genius.com/artists/krechet">
<div style="text-align: center; font-size: 14px;">@krechet</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Krechet.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/krechet).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/krechet")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1c2yk38s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Krechet's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/39bxkroc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/39bxkroc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/krechet')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/krechet")
model = AutoModelWithLMHead.from_pretrained("huggingartists/krechet")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/5nizza
|
huggingartists
| 2021-09-12T12:34:26Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/5nizza",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/5nizza
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/289ded19d51d41798be99217d6059eb3.458x458x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">5’Nizza</div>
<a href="https://genius.com/artists/5nizza">
<div style="text-align: center; font-size: 14px;">@5nizza</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from 5’Nizza.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/5nizza).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/5nizza")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1zcp1grf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 5’Nizza's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2zg6pzw7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2zg6pzw7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/5nizza')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/5nizza")
model = AutoModelWithLMHead.from_pretrained("huggingartists/5nizza")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/mashina-vremeni
|
huggingartists
| 2021-09-12T12:25:03Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/mashina-vremeni",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/mashina-vremeni
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b780335021ab0e732601f25bd7a3d319.380x380x1.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Машина Времени (Mashina Vremeni)</div>
<a href="https://genius.com/artists/mashina-vremeni">
<div style="text-align: center; font-size: 14px;">@mashina-vremeni</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Машина Времени (Mashina Vremeni).
The dataset is available [here](https://huggingface.co/datasets/huggingartists/mashina-vremeni).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/mashina-vremeni")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3r1yxrx7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Машина Времени (Mashina Vremeni)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1cgaltpc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1cgaltpc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/mashina-vremeni')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/mashina-vremeni")
model = AutoModelWithLMHead.from_pretrained("huggingartists/mashina-vremeni")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/lyapis-trubetskoy
|
huggingartists
| 2021-09-12T12:17:52Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/lyapis-trubetskoy",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/lyapis-trubetskoy
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/452918252959798bad82762cda0dc2d7.340x340x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ляпис Трубецкой (Lyapis Trubetskoy)</div>
<a href="https://genius.com/artists/lyapis-trubetskoy">
<div style="text-align: center; font-size: 14px;">@lyapis-trubetskoy</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Ляпис Трубецкой (Lyapis Trubetskoy).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/lyapis-trubetskoy).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/lyapis-trubetskoy")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ycs0usm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Ляпис Трубецкой (Lyapis Trubetskoy)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/uz1xtq0k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/uz1xtq0k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/lyapis-trubetskoy')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/lyapis-trubetskoy")
model = AutoModelWithLMHead.from_pretrained("huggingartists/lyapis-trubetskoy")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/our-last-night
|
huggingartists
| 2021-09-12T07:51:32Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/our-last-night",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/our-last-night
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/03627944481dcdb782595e9d3e351853.959x959x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Our Last Night</div>
<a href="https://genius.com/artists/our-last-night">
<div style="text-align: center; font-size: 14px;">@our-last-night</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Our Last Night.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/our-last-night).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/our-last-night")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/37o66f2j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Our Last Night's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1hifralf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1hifralf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/our-last-night')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/our-last-night")
model = AutoModelWithLMHead.from_pretrained("huggingartists/our-last-night")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
tarikul/distilbert-base-uncased-finetuned-squad
|
tarikul
| 2021-09-12T07:19:38Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
huggingartists/armin-van-buuren
|
huggingartists
| 2021-09-12T03:06:42Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/armin-van-buuren",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/armin-van-buuren
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b1a35069a1a44927425ef26c0bbda4a4.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Armin van Buuren</div>
<a href="https://genius.com/artists/armin-van-buuren">
<div style="text-align: center; font-size: 14px;">@armin-van-buuren</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Armin van Buuren.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/armin-van-buuren).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/armin-van-buuren")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/hrrfc55y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Armin van Buuren's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3q93rwo8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3q93rwo8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/armin-van-buuren')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/armin-van-buuren")
model = AutoModelWithLMHead.from_pretrained("huggingartists/armin-van-buuren")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/21-savage
|
huggingartists
| 2021-09-11T16:36:16Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/21-savage",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/21-savage
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/aa32202cc20d1dde62e57940a8b278b2.770x770x1.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">21 Savage</div>
<a href="https://genius.com/artists/21-savage">
<div style="text-align: center; font-size: 14px;">@21-savage</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from 21 Savage.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/21-savage).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/21-savage")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3lbkznnf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on 21 Savage's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1fw9b6m4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1fw9b6m4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/21-savage')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/21-savage")
model = AutoModelWithLMHead.from_pretrained("huggingartists/21-savage")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/ed-sheeran
|
huggingartists
| 2021-09-11T15:43:35Z
| 4
| 1
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/ed-sheeran",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/ed-sheeran
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b501daeff73d1b17610f47a5668f690a.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ed Sheeran</div>
<a href="https://genius.com/artists/ed-sheeran">
<div style="text-align: center; font-size: 14px;">@ed-sheeran</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Ed Sheeran.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/ed-sheeran).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/ed-sheeran")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3nju68bo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Ed Sheeran's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3hu7zc76) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3hu7zc76/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/ed-sheeran')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/ed-sheeran")
model = AutoModelWithLMHead.from_pretrained("huggingartists/ed-sheeran")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/aimer
|
huggingartists
| 2021-09-11T13:43:46Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/aimer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/aimer
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/123a0b2ef09a25207b610c5bd7b21d0f.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Aimer</div>
<a href="https://genius.com/artists/aimer">
<div style="text-align: center; font-size: 14px;">@aimer</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Aimer.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/aimer).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/aimer")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1rtjxc8q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Aimer's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2rguugmg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2rguugmg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/aimer')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/aimer")
model = AutoModelWithLMHead.from_pretrained("huggingartists/aimer")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/imagine-dragons
|
huggingartists
| 2021-09-11T13:36:33Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/imagine-dragons",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/imagine-dragons
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/ec1df125fd46ec3ef56f228df021a8cd.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Imagine Dragons</div>
<a href="https://genius.com/artists/imagine-dragons">
<div style="text-align: center; font-size: 14px;">@imagine-dragons</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Imagine Dragons.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/imagine-dragons).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/imagine-dragons")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/dln6ixis/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Imagine Dragons's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3cj3c8z1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3cj3c8z1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/imagine-dragons')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/imagine-dragons")
model = AutoModelWithLMHead.from_pretrained("huggingartists/imagine-dragons")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/sergei-letov
|
huggingartists
| 2021-09-11T12:13:08Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/sergei-letov",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/sergei-letov
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a5717aec4301e2adfb464d3b85701f74.300x300x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Сергей Летов (Sergei Letov)</div>
<a href="https://genius.com/artists/sergei-letov">
<div style="text-align: center; font-size: 14px;">@sergei-letov</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Сергей Летов (Sergei Letov).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/sergei-letov).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/sergei-letov")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1chw67j7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Сергей Летов (Sergei Letov)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/my7m2jp6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/my7m2jp6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/sergei-letov')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/sergei-letov")
model = AutoModelWithLMHead.from_pretrained("huggingartists/sergei-letov")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
airesearch/wangchanberta-base-wiki-newmm
|
airesearch
| 2021-09-11T09:39:18Z
| 636
| 2
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-newmm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found in [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5).
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
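For a quick check outside the notebook, a minimal fill-mask sketch (a hedged example that assumes the hosted checkpoint and tokenizer load through the standard `pipeline` API; the Colab notebook above remains the authoritative reference for loading this model's word-level tokenizer):
```python
from transformers import pipeline

# Minimal sketch: predict the masked token in a short Thai sentence.
# The example sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="airesearch/wangchanberta-base-wiki-newmm")
text = "ผมชอบ" + fill_mask.tokenizer.mask_token + "ข้าว"
print(fill_mask(text))  # top candidate tokens with scores
```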
<br>
## Training data
The `wangchanberta-base-wiki-newmm` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles from 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). Lists and tables are excluded.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use word-level tokens from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer, namely `newmm`. The total number of word-level tokens in the vocabulary is 97,982.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap a 512-token boundary, we split the sentence and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens for replacement. Of those, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
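For illustration only, a sketch of that token-selection rule (special tokens, whole-word handling, and label construction are omitted; this is not the project's training code):
```python
import random

def mask_tokens(tokens, vocab, mask_token="<mask>", select_prob=0.15):
    """Illustrative RoBERTa-style masking: select 15% of tokens, then replace
    80% of the selected with <mask>, keep 10% unchanged, and swap 10% for a
    random vocabulary token."""
    masked = []
    for tok in tokens:
        if random.random() < select_prob:
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)
            elif r < 0.9:
                masked.append(tok)
            else:
                masked.append(random.choice(vocab))
        else:
            masked.append(tok)
    return masked
```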
<br>
**Train/Val/Test splits**
We split the data sequentially: 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
airesearch/wangchanberta-base-wiki-sefr
|
airesearch
| 2021-09-11T09:39:05Z
| 52
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-sefr`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found in [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5).
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
The `wangchanberta-base-wiki-sefr` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles from 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). Lists and tables are excluded.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use the Stacked Ensemble Filter and Refine (SEFR) tokenizer (`engine="best"`) [[Limkonchotiwat et al., 2020]](https://www.aclweb.org/anthology/2020.emnlp-main.315/) based on probabilities from the CNN-based `deepcut` [[Kittinaradorn et al., 2019]](http://doi.org/10.5281/zenodo.3457707). The total number of word-level tokens in the vocabulary is 92,177.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap a 512-token boundary, we split the sentence and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens for replacement. Of those, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially: 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
airesearch/wangchanberta-base-wiki-spm
|
airesearch
| 2021-09-11T09:38:49Z
| 190
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-spm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found in [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5).
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
The `wangchanberta-base-wiki-spm` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles from 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). Lists and tables are excluded.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use subword tokens trained with the [SentencePiece](https://github.com/google/sentencepiece) library on the training set of the Thai Wikipedia corpus. The total number of subword tokens is 24,000.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap a 512-token boundary, we split the sentence and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens for replacement. Of those, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially: 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
airesearch/wangchanberta-base-wiki-syllable
|
airesearch
| 2021-09-11T09:38:36Z
| 60
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-syllable`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found in [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5).
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
The `wangchanberta-base-wiki-syllable` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles from 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). Lists and tables are excluded.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use a Thai syllable-level dictionary-based tokenizer denoted as `syllable` from PyThaiNLP [Phatthiyaphaibun et al., 2016]. The total number of word-level tokens in the vocabulary is 59,235.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap a 512-token boundary, we split the sentence and insert an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens for replacement. Of those, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially: 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa
|
airesearch
| 2021-09-11T09:28:19Z
| 111
| 0
|
transformers
|
[
"transformers",
"pytorch",
"camembert",
"question-answering",
"th",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z
|
---
language: th
widget:
- text: "สวนกุหลาบเป็นโรงเรียนอะไร"
context: "โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) (อักษรย่อ : ส.ก. / S.K.) เป็นโรงเรียนชายล้วน ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ สังกัดสำนักงานเขตพื้นที่การศึกษามัธยมศึกษาเขต 1 สำนักงานคณะกรรมการการศึกษาขั้นพื้นฐาน (ชื่อเดิม: กรมสามัญศึกษา) กระทรวงศึกษาธิการ ก่อตั้งโดย พระบาทสมเด็จพระจุลจอมเกล้าเจ้าอยู่หัว ได้รับการสถาปนาขึ้นในวันที่ 8 มีนาคม พ.ศ. 2424 (ขณะนั้นนับวันที่ 1 เมษายน เป็นวันขึ้นปีใหม่ เมื่อนับอย่างสากลถือเป็น พ.ศ. 2425) โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
---
# wangchanberta-base-wiki-20210520-spm-finetune-qa
Finetuned from `airesearchth/wangchanberta-base-wiki-20210520-spmd` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (examples with cosine similarity above 0.8 to any validation or test example were removed; contexts of the latter two datasets are trimmed to roughly 300 `newmm` words). Benchmarks are shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using the validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
  --model_name $MODEL_NAME \
  --dataset_name chimera_qa \
  --output_dir $MODEL_NAME-finetune-chimera_qa-model \
  --log_dir $MODEL_NAME-finetune-chimera_qa-log \
  --model_max_length 400 \
  --pad_on_right \
  --fp16
```
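A minimal inference sketch for the resulting checkpoint (a hedged example that assumes the checkpoint loads through the standard question-answering `pipeline`; the question comes from the widget above and the context is a shortened version of the widget context):
```python
from transformers import pipeline

# Hedged sketch: answer the widget question against a shortened context.
qa = pipeline(
    "question-answering",
    model="airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa",
)
result = qa(
    question="สวนกุหลาบเป็นโรงเรียนอะไร",
    context=(
        "โรงเรียนสวนกุหลาบวิทยาลัย เป็นโรงเรียนชายล้วน "
        "ระดับชั้นมัธยมศึกษาขนาดใหญ่พิเศษ โดยเป็นโรงเรียนรัฐบาลแห่งแรกของประเทศไทย"
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```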
|
harr/distilbert-base-uncased-finetuned-ingredients
|
harr
| 2021-09-11T09:20:35Z
| 9
| 5
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:ingredients_yes_no",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ingredients_yes_no
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ingredients
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ingredients_yes_no
type: ingredients_yes_no
args: IngredientsYesNo
metrics:
- name: Precision
type: precision
value: 0.9898648648648649
- name: Recall
type: recall
value: 0.9932203389830508
- name: F1
type: f1
value: 0.9915397631133671
- name: Accuracy
type: accuracy
value: 0.9978308026030369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ingredients
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ingredients_yes_no dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0105
- Precision: 0.9899
- Recall: 0.9932
- F1: 0.9915
- Accuracy: 0.9978
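For quick inspection of the checkpoint, a hedged inference sketch (the input sentence is illustrative; it assumes the hosted tokenizer and label set load through the standard `pipeline` API):
```python
from transformers import pipeline

# Hedged sketch: tag ingredient-related tokens with the fine-tuned checkpoint.
tagger = pipeline(
    "token-classification",
    model="harr/distilbert-base-uncased-finetuned-ingredients",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)
print(tagger("Mix two cups of flour with a pinch of salt."))
```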
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 47 | 0.2783 | 0.4 | 0.5492 | 0.4629 | 0.8910 |
| No log | 2.0 | 94 | 0.1089 | 0.8145 | 0.8780 | 0.8450 | 0.9718 |
| No log | 3.0 | 141 | 0.0273 | 0.9865 | 0.9932 | 0.9899 | 0.9973 |
| No log | 4.0 | 188 | 0.0168 | 0.9865 | 0.9932 | 0.9899 | 0.9973 |
| No log | 5.0 | 235 | 0.0156 | 0.9865 | 0.9898 | 0.9882 | 0.9957 |
| No log | 6.0 | 282 | 0.0129 | 0.9865 | 0.9932 | 0.9899 | 0.9973 |
| No log | 7.0 | 329 | 0.0121 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
| No log | 8.0 | 376 | 0.0115 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
| No log | 9.0 | 423 | 0.0108 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
| No log | 10.0 | 470 | 0.0105 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
arogyaGurkha/koelectra-base-discriminator-finetuned-squad_kor_v1
|
arogyaGurkha
| 2021-09-11T08:34:39Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad_kor_v1",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z
|
---
tags:
- generated_from_trainer
datasets:
- squad_kor_v1
model-index:
- name: koelectra-base-discriminator-finetuned-squad_kor_v1
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad_kor_v1
type: squad_kor_v1
args: squad_kor_v1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-base-discriminator-finetuned-squad_kor_v1
This model is a fine-tuned version of [monologg/koelectra-base-discriminator](https://huggingface.co/monologg/koelectra-base-discriminator) on the squad_kor_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5774 | 1.0 | 4025 | 0.5589 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
huggingartists/the-the-pigs
|
huggingartists
| 2021-09-11T06:35:36Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/the-the-pigs",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/the-the-pigs
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/2f1fd1b951237ad3387096f392d41fa5.720x720x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The ‘’Вепри’’ (The Pigs)</div>
<a href="https://genius.com/artists/the-the-pigs">
<div style="text-align: center; font-size: 14px;">@the-the-pigs</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from The ‘’Вепри’’ (The Pigs).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/the-the-pigs).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-the-pigs")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/7yh65db9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The ‘’Вепри’’ (The Pigs)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/65gj1lk1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/65gj1lk1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/the-the-pigs')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-the-pigs")
model = AutoModelWithLMHead.from_pretrained("huggingartists/the-the-pigs")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
megagonlabs/optimus-amzn
|
megagonlabs
| 2021-09-11T00:16:57Z
| 12
| 1
|
transformers
|
[
"transformers",
"pytorch",
"summarization",
"en",
"license:bsd-3-clause",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z
|
---
language: en
tags:
- summarization
inference: false
license: bsd-3-clause
---
## Optimus model
See the original GitHub repo for more details [here](https://github.com/megagonlabs/coop)
|
megagonlabs/optimus-yelp
|
megagonlabs
| 2021-09-11T00:16:32Z
| 10
| 0
|
transformers
|
[
"transformers",
"pytorch",
"summarization",
"en",
"license:bsd-3-clause",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z
|
---
language: en
tags:
- summarization
inference: false
license: bsd-3-clause
---
## Optimus model
See the original GitHub repo for more details [here](https://github.com/megagonlabs/coop)
|
huggingartists/slava-kpss
|
huggingartists
| 2021-09-10T16:22:01Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/slava-kpss",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/slava-kpss
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e63e3a804916ed71bf2941ac4e190063.847x847x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Слава КПСС (Slava KPSS)</div>
<a href="https://genius.com/artists/slava-kpss">
<div style="text-align: center; font-size: 14px;">@slava-kpss</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Слава КПСС (Slava KPSS).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/slava-kpss).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/slava-kpss")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2f2r3u3b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Слава КПСС (Slava KPSS)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/pecxkpae) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/pecxkpae/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/slava-kpss')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/slava-kpss")
model = AutoModelWithLMHead.from_pretrained("huggingartists/slava-kpss")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/lil-peep
|
huggingartists
| 2021-09-10T14:54:32Z
| 4
| 1
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/lil-peep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/lil-peep
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/919c7ba130d3861740cbe7fbd7f83c59.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lil Peep</div>
<a href="https://genius.com/artists/lil-peep">
<div style="text-align: center; font-size: 14px;">@lil-peep</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Lil Peep.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/lil-peep).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/lil-peep")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/39q6kspr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lil Peep's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/g0nxk974) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/g0nxk974/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/lil-peep')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/lil-peep")
model = AutoModelWithLMHead.from_pretrained("huggingartists/lil-peep")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/skillet
|
huggingartists
| 2021-09-10T14:51:47Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/skillet",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/skillet
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c42b7baa88dae01013eebc53c0aed177.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Skillet</div>
<a href="https://genius.com/artists/skillet">
<div style="text-align: center; font-size: 14px;">@skillet</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Skillet.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/skillet).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/skillet")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1wmbkzn8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Skillet's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3jke6b6i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3jke6b6i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/skillet')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/skillet")
model = AutoModelWithLMHead.from_pretrained("huggingartists/skillet")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/john-lennon
|
huggingartists
| 2021-09-10T10:37:44Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/john-lennon",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/john-lennon
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/de14b272004b51dea8071e7cba21cbac.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">John Lennon</div>
<a href="https://genius.com/artists/john-lennon">
<div style="text-align: center; font-size: 14px;">@john-lennon</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from John Lennon.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/john-lennon).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/john-lennon")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/f3d8fseh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on John Lennon's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/36mtogkg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/36mtogkg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/john-lennon')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/john-lennon")
model = AutoModelWithLMHead.from_pretrained("huggingartists/john-lennon")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/the-velvet-underground
|
huggingartists
| 2021-09-10T09:04:08Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/the-velvet-underground",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/the-velvet-underground
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://s3.amazonaws.com/rapgenius/vu.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Velvet Underground</div>
<a href="https://genius.com/artists/the-velvet-underground">
<div style="text-align: center; font-size: 14px;">@the-velvet-underground</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from The Velvet Underground.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/the-velvet-underground).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-velvet-underground")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/lbkqy84q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Velvet Underground's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1e4s74q4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1e4s74q4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/the-velvet-underground')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-velvet-underground")
model = AutoModelWithLMHead.from_pretrained("huggingartists/the-velvet-underground")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/enigma
|
huggingartists
| 2021-09-10T08:57:05Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/enigma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/enigma
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/4b5472082f220eb9c2ca6b22f4d12f45.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Enigma</div>
<a href="https://genius.com/artists/enigma">
<div style="text-align: center; font-size: 14px;">@enigma</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Enigma.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/enigma).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/enigma")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/8bx90lw6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Enigma's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1c1t20ji) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1c1t20ji/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/enigma')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/enigma")
model = AutoModelWithLMHead.from_pretrained("huggingartists/enigma")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/kipelov
|
huggingartists
| 2021-09-10T08:40:56Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/kipelov",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/kipelov
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/d4ae6ad73ca63bc97b2a10dfefc47b63.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Кипелов (Kipelov)</div>
<a href="https://genius.com/artists/kipelov">
<div style="text-align: center; font-size: 14px;">@kipelov</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Кипелов (Kipelov).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/kipelov).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/kipelov")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/225m5y65/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Кипелов (Kipelov)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/38es269x) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/38es269x/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/kipelov')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/kipelov")
model = AutoModelWithLMHead.from_pretrained("huggingartists/kipelov")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/grigory-leps
|
huggingartists
| 2021-09-10T08:13:40Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/grigory-leps",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/grigory-leps
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f30e8944a06a196868ee4b077a7926a6.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Григорий Лепс (Grigory Leps)</div>
<a href="https://genius.com/artists/grigory-leps">
<div style="text-align: center; font-size: 14px;">@grigory-leps</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Григорий Лепс (Grigory Leps).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/grigory-leps).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/grigory-leps")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/32wqexib/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Григорий Лепс (Grigory Leps)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1j0f6nwb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1j0f6nwb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/grigory-leps')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/grigory-leps")
model = AutoModelWithLMHead.from_pretrained("huggingartists/grigory-leps")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/vladimir-vysotsky
|
huggingartists
| 2021-09-10T07:47:12Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/vladimir-vysotsky",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/vladimir-vysotsky
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/18735fe10bace7b3f615b2da9c95ac73.938x938x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Владимир Высоцкий (Vladimir Vysotsky)</div>
<a href="https://genius.com/artists/vladimir-vysotsky">
<div style="text-align: center; font-size: 14px;">@vladimir-vysotsky</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Владимир Высоцкий (Vladimir Vysotsky).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/vladimir-vysotsky).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/vladimir-vysotsky")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1w1qc649/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Владимир Высоцкий (Vladimir Vysotsky)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1inrl5qe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1inrl5qe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/vladimir-vysotsky')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/vladimir-vysotsky")
model = AutoModelWithLMHead.from_pretrained("huggingartists/vladimir-vysotsky")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/mf-doom
|
huggingartists
| 2021-09-10T07:07:44Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/mf-doom",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
datasets:
- huggingartists/mf-doom
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/263743633b6e58854e753b25dca6beab.430x430x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MF DOOM</div>
<a href="https://genius.com/artists/mf-doom">
<div style="text-align: center; font-size: 14px;">@mf-doom</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from MF DOOM.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/mf-doom).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/mf-doom")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3lhrsfds/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on MF DOOM's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/vw48qbeh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/mf-doom')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/mf-doom")
model = AutoModelWithLMHead.from_pretrained("huggingartists/mf-doom")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
juanfiguera/ice_cream
|
juanfiguera
| 2021-09-10T02:59:58Z
| 66
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ice_cream
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5166666507720947
---
# ice_cream
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
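As a rough usage sketch with the standard `transformers` image-classification pipeline (the image URL is a placeholder):
```python
from transformers import pipeline
from PIL import Image
import requests

# Minimal sketch: classify an image with this ViT checkpoint.
classifier = pipeline("image-classification", model="juanfiguera/ice_cream")

# Placeholder URL -- substitute any ice cream photo.
url = "https://example.com/ice_cream.jpg"
image = Image.open(requests.get(url, stream=True).raw)

for pred in classifier(image):
    print(pred["label"], round(pred["score"], 3))
```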
## Example Images
#### chocolate ice cream

#### vanilla ice cream

|
dattam/DialoGPT-medium-TonyStarkBot
|
dattam
| 2021-09-09T18:05:36Z
| 4
| 3
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
tags:
- conversational
---
# Tony Stark DialoGPT model
Invite me to your Discord server: https://discord.com/api/oauth2/authorize?client_id=885065886787063848&permissions=137439365184&scope=bot
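For local experimentation, a minimal conversational-generation sketch following the usual DialoGPT pattern (not an official example from the author; the prompt is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")
model = AutoModelForCausalLM.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")

# Encode one user turn, append the end-of-sequence token, and generate a reply.
user_input = "Hello, who are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (everything after the user turn).
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```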
|
sevbqewre/hyou
|
sevbqewre
| 2021-09-09T17:57:48Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z
|
https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-full-movie-watch-online-download-hdrip
https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-online-full-free-download
https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-full-movie-download-hd
https://maccaboard.paulmccartney.com/users/download-shang-chi-2021-online-full-movie-free
https://maccaboard.paulmccartney.com/users/123movies-online-shang-chi-2021-watch-free-hd
https://maccaboard.paulmccartney.com/users/shang-chi-hindi-dubbed-movie-download-full-hd-720p-telegram
https://maccaboard.paulmccartney.com/users/watch-shang-chi-hindi-dubbed-movie-download-full-hd-720p-0
https://maccaboard.paulmccartney.com/users/hindi-dubbed-watch-shang-chi-2021-full-hd-movie-online-free
https://maccaboard.paulmccartney.com/users/download-shang-chi-2021-torrent-movie-free-hd-yts
https://maccaboard.paulmccartney.com/users/download-720p-shang-chi-2021-full-movie-watch-free
https://maccaboard.paulmccartney.com/users/watch-shang-chi-2021-full-movie-hd-online-free-download
https://maccaboard.paulmccartney.com/users/download-shang-chi-full-movie-and-watch-online-free-hd-720p
https://maccaboard.paulmccartney.com/users/123movies-free-shang-chi-2021-online-download-full-hd
|
Aleksandar/distilbert-srb-ner
|
Aleksandar
| 2021-09-09T06:27:16Z
| 14
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"sr",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z
|
---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
language:
- sr
model_index:
- name: distilbert-srb-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: sr
metric:
name: Accuracy
type: accuracy
value: 0.9576561462374611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2972
- Precision: 0.8871
- Recall: 0.9100
- F1: 0.8984
- Accuracy: 0.9577
## Model description
More information needed
## Intended uses & limitations
More information needed
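As an illustrative starting point, a token-classification (NER) pipeline sketch with this checkpoint; the example sentence is an arbitrary Serbian phrase:
```python
from transformers import pipeline

# Minimal sketch: run Serbian NER with this checkpoint.
ner = pipeline(
    "token-classification",
    model="Aleksandar/distilbert-srb-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

text = "Novak Đoković je rođen u Beogradu."  # "Novak Djokovic was born in Belgrade."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```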
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3818 | 1.0 | 625 | 0.2175 | 0.8175 | 0.8370 | 0.8272 | 0.9306 |
| 0.198 | 2.0 | 1250 | 0.1766 | 0.8551 | 0.8732 | 0.8640 | 0.9458 |
| 0.1423 | 3.0 | 1875 | 0.1702 | 0.8597 | 0.8763 | 0.8679 | 0.9473 |
| 0.079 | 4.0 | 2500 | 0.1774 | 0.8674 | 0.8875 | 0.8773 | 0.9515 |
| 0.0531 | 5.0 | 3125 | 0.2011 | 0.8688 | 0.8965 | 0.8825 | 0.9522 |
| 0.0429 | 6.0 | 3750 | 0.2082 | 0.8769 | 0.8970 | 0.8868 | 0.9538 |
| 0.032 | 7.0 | 4375 | 0.2268 | 0.8764 | 0.8916 | 0.8839 | 0.9528 |
| 0.0204 | 8.0 | 5000 | 0.2423 | 0.8726 | 0.8959 | 0.8841 | 0.9529 |
| 0.0148 | 9.0 | 5625 | 0.2522 | 0.8774 | 0.8991 | 0.8881 | 0.9538 |
| 0.0125 | 10.0 | 6250 | 0.2544 | 0.8823 | 0.9024 | 0.8922 | 0.9559 |
| 0.0108 | 11.0 | 6875 | 0.2592 | 0.8780 | 0.9041 | 0.8909 | 0.9553 |
| 0.007 | 12.0 | 7500 | 0.2672 | 0.8877 | 0.9056 | 0.8965 | 0.9571 |
| 0.0048 | 13.0 | 8125 | 0.2714 | 0.8879 | 0.9089 | 0.8982 | 0.9583 |
| 0.0049 | 14.0 | 8750 | 0.2872 | 0.8873 | 0.9068 | 0.8970 | 0.9573 |
| 0.0034 | 15.0 | 9375 | 0.2915 | 0.8883 | 0.9114 | 0.8997 | 0.9577 |
| 0.0027 | 16.0 | 10000 | 0.2890 | 0.8865 | 0.9103 | 0.8983 | 0.9581 |
| 0.0028 | 17.0 | 10625 | 0.2885 | 0.8877 | 0.9085 | 0.8980 | 0.9576 |
| 0.0014 | 18.0 | 11250 | 0.2928 | 0.8860 | 0.9073 | 0.8965 | 0.9577 |
| 0.0013 | 19.0 | 11875 | 0.2963 | 0.8856 | 0.9099 | 0.8976 | 0.9576 |
| 0.001 | 20.0 | 12500 | 0.2972 | 0.8871 | 0.9100 | 0.8984 | 0.9577 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.1
|
eugenesiow/han
|
eugenesiow
| 2021-09-09T01:59:04Z
| 150
| 0
|
transformers
|
[
"transformers",
"HAN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:2008.08767",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- super-image
- image-super-resolution
datasets:
- eugenesiow/Div2k
- eugenesiow/Set5
- eugenesiow/Set14
- eugenesiow/BSD100
- eugenesiow/Urban100
metrics:
- psnr
- ssim
---
# Holistic Attention Network (HAN)
HAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Single Image Super-Resolution via a Holistic Attention Network](https://arxiv.org/abs/2008.08767) by Niu et al. (2020) and first released in [this repository](https://github.com/wwlCape/HAN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import HanModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = HanModel.from_pretrained('eugenesiow/han', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, 64×64 RGB patches from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, HanModel, HanConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = HanConfig(
scale=4, # train a model to upscale 4x
)
model = HanModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
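For reference, PSNR compares a reconstruction against the ground truth through the mean squared error; below is a minimal NumPy sketch of the metric (illustrative only, not the evaluation code used to produce the table that follows):
```python
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth (HR) and an upscaled (SR) image."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```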
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The result columns below are reported as `PSNR/SSIM` and are compared against a bicubic baseline.
|Dataset |Scale |Bicubic |han |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**** |
|Set5 |3x |30.39/0.8678 |**** |
|Set5 |4x |28.42/0.8101 |**31.21/0.8778** |
|Set14 |2x |30.22/0.8683 |**** |
|Set14 |3x |27.53/0.7737 |**** |
|Set14 |4x |25.99/0.7023 |**28.18/0.7712** |
|BSD100 |2x |29.55/0.8425 |**** |
|BSD100 |3x |27.20/0.7382 |**** |
|BSD100 |4x |25.96/0.6672 |**28.09/0.7533** |
|Urban100 |2x |26.66/0.8408 |**** |
|Urban100 |3x | |**** |
|Urban100 |4x |23.14/0.6573 |**25.1/0.7497** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{niu2020single,
title={Single Image Super-Resolution via a Holistic Attention Network},
author={Ben Niu and Weilei Wen and Wenqi Ren and Xiangde Zhang and Lianping Yang and Shuzhen Wang and Kaihao Zhang and Xiaochun Cao and Haifeng Shen},
year={2020},
eprint={2008.08767},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
|
elisno/is_core_web_trf
|
elisno
| 2021-09-08T21:19:54Z
| 4
| 0
|
spacy
|
[
"spacy",
"token-classification",
"is",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z
|
---
tags:
- spacy
- token-classification
language:
- is
model-index:
- name: is_core_web_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9193318395
- name: NER Recall
type: recall
value: 0.9217728758
- name: NER F Score
type: f_score
value: 0.9205507394
---
| Feature | Description |
| --- | --- |
| **Name** | `is_core_web_trf` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `transformer`, `ner`, `tagger`, `parser` |
| **Components** | `transformer`, `ner`, `tagger`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
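A minimal usage sketch, assuming the packaged pipeline has been installed locally (e.g. via `pip install` of the released wheel); the example sentence is an arbitrary Icelandic phrase:
```python
import spacy

# Load the installed Icelandic transformer pipeline.
nlp = spacy.load("is_core_web_trf")

doc = nlp("Katrín Jakobsdóttir var forsætisráðherra Íslands.")
for ent in doc.ents:
    print(ent.text, ent.label_)        # named entities
for token in doc:
    print(token.text, token.tag_, token.dep_)  # fine-grained tags and dependencies
```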
### Label Scheme
<details>
<summary>View label scheme (591 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `Date`, `Location`, `Miscellaneous`, `Money`, `Organization`, `Percent`, `Person`, `Time` |
| **`tagger`** | `aa`, `aae`, `aam`, `af`, `afe`, `afm`, `au`, `c`, `cn`, `ct`, `e`, `fahee`, `fahen`, `faheo`, `faheþ`, `fahfe`, `fahfn`, `fahfo`, `fahfþ`, `fakee`, `faken`, `fakeo`, `fakeþ`, `fakfe`, `fakfn`, `fakfo`, `fakfþ`, `favee`, `faven`, `faveo`, `faveþ`, `favfe`, `favfn`, `favfo`, `favfþ`, `fbhee`, `fbhen`, `fbheo`, `fbheþ`, `fbhfe`, `fbhfn`, `fbhfo`, `fbhfþ`, `fbkee`, `fbken`, `fbkeo`, `fbkeþ`, `fbkfe`, `fbkfn`, `fbkfo`, `fbkfþ`, `fbvee`, `fbven`, `fbveo`, `fbveþ`, `fbvfe`, `fbvfn`, `fbvfo`, `fbvfþ`, `fehee`, `fehen`, `feheo`, `feheþ`, `fehfe`, `fehfn`, `fehfo`, `fehfþ`, `fekee`, `feken`, `fekeo`, `fekeþ`, `fekfe`, `fekfn`, `fekfo`, `fekfþ`, `fevee`, `feven`, `feveo`, `feveþ`, `fevfe`, `fevfn`, `fevfo`, `fevfþ`, `fohee`, `fohen`, `foheo`, `foheþ`, `fohfe`, `fohfn`, `fohfo`, `fohfþ`, `fokee`, `foken`, `fokeo`, `fokeþ`, `fokfe`, `fokfn`, `fokfo`, `fokfþ`, `fovee`, `foven`, `foveo`, `foveþ`, `fovfe`, `fovfn`, `fovfo`, `fovfþ`, `fp1ee`, `fp1en`, `fp1eo`, `fp1eþ`, `fp1fe`, `fp1fn`, `fp1fo`, `fp1fþ`, `fp2ee`, `fp2en`, `fp2eo`, `fp2eþ`, `fp2fe`, `fp2fn`, `fp2fo`, `fp2fþ`, `fphee`, `fphen`, `fpheo`, `fpheþ`, `fphfe`, `fphfn`, `fphfo`, `fphfþ`, `fpkee`, `fpken`, `fpkeo`, `fpkeþ`, `fpkfe`, `fpkfn`, `fpkfo`, `fpkfþ`, `fpvee`, `fpven`, `fpveo`, `fpveþ`, `fpvfe`, `fpvfn`, `fpvfo`, `fpvfþ`, `fshee`, `fshen`, `fsheo`, `fsheþ`, `fshfe`, `fshfn`, `fshfo`, `fshfþ`, `fskee`, `fsken`, `fskeo`, `fskeþ`, `fskfe`, `fskfn`, `fskfo`, `fskfþ`, `fsvee`, `fsven`, `fsveo`, `fsveþ`, `fsvfe`, `fsvfn`, `fsvfo`, `fsvfþ`, `ghee`, `ghen`, `gheo`, `gheþ`, `ghfe`, `ghfn`, `ghfo`, `ghfþ`, `gkee`, `gken`, `gkeo`, `gkeþ`, `gkfe`, `gkfn`, `gkfo`, `gkfþ`, `gvee`, `gven`, `gveo`, `gveþ`, `gvfe`, `gvfn`, `gvfo`, `gvfþ`, `ks`, `kt`, `lheeof`, `lheesf`, `lheeve`, `lheevf`, `lheevm`, `lhenof`, `lhense`, `lhensf`, `lhenve`, `lhenvf`, `lhenvm`, `lheoof`, `lheose`, `lheosf`, `lheosm`, `lheove`, `lheovf`, `lheovm`, `lheþof`, `lheþse`, `lheþsf`, `lheþve`, `lheþvf`, `lheþvm`, `lhfeof`, `lhfese`, `lhfesf`, `lhfeve`, `lhfevf`, `lhfevm`, `lhfnof`, `lhfnse`, `lhfnsf`, `lhfnve`, `lhfnvf`, `lhfnvm`, `lhfoof`, `lhfose`, `lhfosf`, `lhfove`, `lhfovf`, `lhfovm`, `lhfþof`, `lhfþse`, `lhfþsf`, `lhfþve`, `lhfþvf`, `lhfþvm`, `lkeeof`, `lkeesf`, `lkeeve`, `lkeevf`, `lkeevm`, `lkenof`, `lkense`, `lkensf`, `lkenve`, `lkenvf`, `lkenvm`, `lkeoof`, `lkeose`, `lkeosf`, `lkeove`, `lkeovf`, `lkeovm`, `lkeþof`, `lkeþse`, `lkeþsf`, `lkeþve`, `lkeþvf`, `lkeþvm`, `lkfeof`, `lkfese`, `lkfesf`, `lkfeve`, `lkfevf`, `lkfevm`, `lkfnof`, `lkfnse`, `lkfnsf`, `lkfnve`, `lkfnvf`, `lkfnvm`, `lkfoof`, `lkfose`, `lkfosf`, `lkfove`, `lkfovf`, `lkfovm`, `lkfþof`, `lkfþse`, `lkfþsf`, `lkfþsm`, `lkfþve`, `lkfþvf`, `lkfþvm`, `lveeof`, `lveese`, `lveesf`, `lveeve`, `lveevf`, `lveevm`, `lvenof`, `lvense`, `lvensf`, `lvenve`, `lvenvf`, `lvenvm`, `lveoof`, `lveose`, `lveosf`, `lveove`, `lveovf`, `lveovm`, `lveþof`, `lveþse`, `lveþsf`, `lveþve`, `lveþvf`, `lveþvm`, `lvfeof`, `lvfese`, `lvfesf`, `lvfeve`, `lvfevf`, `lvfevm`, `lvfnof`, `lvfnse`, `lvfnsf`, `lvfnve`, `lvfnvf`, `lvfnvm`, `lvfoof`, `lvfose`, `lvfosf`, `lvfove`, `lvfovf`, `lvfovm`, `lvfþof`, `lvfþse`, `lvfþsf`, `lvfþsm`, `lvfþve`, `lvfþvf`, `lvfþvm`, `m`, `n----s`, `n-ee`, `n-ee-s`, `n-en`, `n-en-s`, `n-eng`, `n-eo`, `n-eo-s`, `n-eþ`, `n-eþ-s`, `n-fn`, `nhee`, `nhee-s`, `nheeg`, `nheegs`, `nhen`, `nhen-s`, `nheng`, `nhengs`, `nheo`, `nheo-s`, `nheog`, `nheogs`, `nheþ`, `nheþ-s`, `nheþg`, `nheþgs`, `nhfe`, `nhfe-s`, `nhfeg`, `nhfegs`, `nhfn`, `nhfn-s`, `nhfng`, `nhfngs`, `nhfo`, `nhfo-s`, `nhfog`, `nhfogs`, `nhfþ`, 
`nhfþ-s`, `nhfþg`, `nhfþgs`, `nkee`, `nkee-s`, `nkeeg`, `nkeegs`, `nken`, `nken-s`, `nkeng`, `nkengs`, `nkeo`, `nkeo-s`, `nkeog`, `nkeogs`, `nkeþ`, `nkeþ-s`, `nkeþg`, `nkeþgs`, `nkfe`, `nkfe-s`, `nkfeg`, `nkfegs`, `nkfn`, `nkfn-s`, `nkfng`, `nkfngs`, `nkfo`, `nkfo-s`, `nkfog`, `nkfogs`, `nkfþ`, `nkfþ-s`, `nkfþg`, `nkfþgs`, `nvee`, `nvee-s`, `nveeg`, `nveegs`, `nven`, `nven-s`, `nveng`, `nvengs`, `nveo`, `nveo-s`, `nveog`, `nveogs`, `nveþ`, `nveþ-s`, `nveþg`, `nveþgs`, `nvfe`, `nvfe-s`, `nvfeg`, `nvfegs`, `nvfn`, `nvfn-s`, `nvfng`, `nvfngs`, `nvfo`, `nvfo-s`, `nvfog`, `nvfogs`, `nvfþ`, `nvfþ-s`, `nvfþg`, `nvfþgs`, `pa`, `pg`, `pk`, `pl`, `sbg2en`, `sbg2fn`, `sbm2en`, `sbm2fn`, `sfg1en`, `sfg1eþ`, `sfg1fn`, `sfg1fþ`, `sfg2en`, `sfg2eþ`, `sfg2fn`, `sfg2fþ`, `sfg3en`, `sfg3eþ`, `sfg3fn`, `sfg3fþ`, `sfm1en`, `sfm1eþ`, `sfm1fn`, `sfm1fþ`, `sfm2en`, `sfm2eþ`, `sfm2fn`, `sfm2fþ`, `sfm3en`, `sfm3eþ`, `sfm3fn`, `sfm3fþ`, `slg`, `sng`, `snm`, `svg1en`, `svg1eþ`, `svg1fn`, `svg1fþ`, `svg2en`, `svg2eþ`, `svg2fn`, `svg2fþ`, `svg3en`, `svg3eþ`, `svg3fn`, `svg3fþ`, `svm1en`, `svm1eþ`, `svm1fn`, `svm1fþ`, `svm2en`, `svm2eþ`, `svm2fn`, `svm3en`, `svm3eþ`, `svm3fn`, `svm3fþ`, `sþghen`, `sþgheo`, `sþghfn`, `sþghfo`, `sþgken`, `sþgkeo`, `sþgkfn`, `sþgkfo`, `sþgven`, `sþgveo`, `sþgvfn`, `sþgvfo`, `sþgvfþ`, `sþmhen`, `sþmheo`, `sþmken`, `sþmven`, `ta`, `tfhee`, `tfhen`, `tfheo`, `tfheþ`, `tfhfe`, `tfhfn`, `tfhfo`, `tfhfþ`, `tfkee`, `tfken`, `tfkeo`, `tfkeþ`, `tfkfe`, `tfkfn`, `tfkfo`, `tfkfþ`, `tfvee`, `tfven`, `tfveo`, `tfveþ`, `tfvfe`, `tfvfn`, `tfvfo`, `tfvfþ`, `to`, `tp`, `v`, `x` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `fixed`, `flat:name`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:arg`, `parataxis`, `punct`, `xcomp` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 92.06 |
| `ENTS_P` | 91.93 |
| `ENTS_R` | 92.18 |
| `TRANSFORMER_LOSS` | 248325.98 |
| `NER_LOSS` | 120059.07 |
|
LeoCordoba/beto2beto
|
LeoCordoba
| 2021-09-08T16:31:21Z
| 23
| 0
|
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"text-generation",
"spanish",
"beto",
"es",
"dataset:LeoCordoba/CC-NEWS-ES",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z
|
---
language: es
tags:
- text-generation
- spanish
- encoder-decoder
- beto
license: apache-2.0
datasets:
- LeoCordoba/CC-NEWS-ES
model-index:
- name: beto2beto
---
## beto2beto
Usage example here: https://colab.research.google.com/drive/18a2ZfF1e_Kyyydlv8INQIkJbv294xcAm?usp=sharing
Trained for 3 epochs on CC-NEWS-ES (2019), approximately 68,000 steps. Encoder max length: 40; decoder max length: 128.
## Hyperparameters
## Usage
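This section is empty in the original card; below is a minimal, hedged sketch that assumes the checkpoint loads as a standard `EncoderDecoderModel` and that the input text, prompt, and decoding settings shown here are purely illustrative.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("LeoCordoba/beto2beto")
model = EncoderDecoderModel.from_pretrained("LeoCordoba/beto2beto")

# The card reports encoder max length 40 and decoder max length 128
inputs = tokenizer("Texto de ejemplo en español.", return_tensors="pt",
                   truncation=True, max_length=40)
output_ids = model.generate(**inputs, max_length=128, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```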
## Results
| key | value |
| --- | ----- |
| test_loss | 2.65148806571960452 |
|
sv/gpt2-nft-poetry
|
sv
| 2021-09-08T16:15:47Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt2-nft-poetry
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-nft-poetry
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
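Pending proper documentation, a minimal usage sketch with the standard `text-generation` pipeline (the prompt and decoding settings are illustrative, not taken from the card):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint as a text-generation pipeline
generator = pipeline("text-generation", model="sv/gpt2-nft-poetry")
print(generator("The moon above the gallery", max_length=50, num_return_sequences=1))
```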
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 282 | 4.3092 |
| 4.5403 | 2.0 | 564 | 4.1283 |
| 4.5403 | 3.0 | 846 | 4.0605 |
| 4.039 | 4.0 | 1128 | 4.0321 |
| 4.039 | 5.0 | 1410 | 4.0243 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
fihtrotuld/123
|
fihtrotuld
| 2021-09-08T01:35:59Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z
|
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad"
# The original card hard-coded a personal API token here; use your own and never commit real tokens.
headers = {"Authorization": "Bearer <YOUR_HF_API_TOKEN>"}

def query(payload):
    # POST the payload to the Inference API and return the parsed JSON response
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "What's my name?",
        "context": "My name is Clara and I live in Berkeley.",
    },
})
```
|
nateraw/timm-adv_inception_v3
|
nateraw
| 2021-09-07T19:26:53Z
| 0
| 0
|
timm
|
[
"timm",
"image-classification",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z
|
---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for `timm-resnet50-beans`
**TODO**
**For now, try dragging and dropping this image into the inference widget. It should classify as angular_leaf_spot.**

|
mlkorra/OGBV-gender-bert-hi-en
|
mlkorra
| 2021-09-07T15:13:25Z
| 11
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
## BERT Model for OGBV gendered text classification
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")
model = AutoModelForSequenceClassification.from_pretrained("mlkorra/OGBV-gender-bert-hi-en")
```
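Continuing from the snippet above, a minimal inference sketch; the example sentence is a placeholder and the label names come from whatever `id2label` mapping is stored in the model config (not documented in the card):

```python
import torch

text = "Sample code-mixed Hindi-English sentence"  # hypothetical input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
pred = probs.argmax(dim=-1).item()
# Label semantics are defined by the model config, not by this card
print(model.config.id2label.get(pred, pred), probs.tolist())
```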
## Model Performance
|Metric|dev|test|
|---|--|--|
|Accuracy|0.88|0.81|
|F1(weighted)|0.86|0.80|
|
kamalkraj/bioelectra-base-discriminator-pubmed
|
kamalkraj
| 2021-09-07T13:52:16Z
| 810
| 6
|
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
## BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. In this paper, we introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA (Clark et al., 2020) for the biomedical domain. BioELECTRA outperforms previous models and achieves state-of-the-art (SOTA) results on all 13 datasets in the BLURB benchmark and on all 4 clinical datasets from the BLUE benchmark across 7 NLP tasks. BioELECTRA pretrained on PubMed and PMC full-text articles also performs very well on clinical datasets. BioELECTRA achieves a new SOTA of 86.34% (a 1.39% accuracy improvement) on MedNLI and 64% (a 2.98% accuracy improvement) on PubMedQA.
For a detailed description and experimental results, please refer to our paper [BioELECTRA:Pretrained Biomedical text Encoder using Discriminators](https://www.aclweb.org/anthology/2021.bionlp-1.16/).
Cite our paper using the citation below:
```
@inproceedings{kanakarajan-etal-2021-bioelectra,
title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators",
author = "Kanakarajan, Kamal raj and
Kundumani, Bhuvana and
Sankarasubbu, Malaikannan",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.bionlp-1.16",
doi = "10.18653/v1/2021.bionlp-1.16",
pages = "143--154",
abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.",
}
```
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
tokenizer = ElectraTokenizerFast.from_pretrained("kamalkraj/bioelectra-base-discriminator-pubmed")
sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
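# Round the sign of the logits into 0/1 predictions: 1 marks tokens the discriminator flags as replaced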
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()]
```
|
pritoms/gpt-neo-125M-finetuned-pgt
|
pritoms
| 2021-09-07T08:20:52Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: gpt-neo-125M-finetuned-pgt
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-finetuned-pgt
This model is a fine-tuned version of [pritoms/gpt-neo-125M-finetuned-pgt](https://huggingface.co/pritoms/gpt-neo-125M-finetuned-pgt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6026
## Model description
More information needed
## Intended uses & limitations
More information needed
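In the absence of documented usage, a minimal, hedged sketch with the standard `text-generation` pipeline (the prompt and generation settings are illustrative):

```python
from transformers import pipeline

# Load the fine-tuned GPT-Neo 125M checkpoint for causal text generation
generator = pipeline("text-generation", model="pritoms/gpt-neo-125M-finetuned-pgt")
print(generator("In this experiment we", max_length=60, num_return_sequences=1))
```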
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 26 | 1.5947 |
| No log | 2.0 | 52 | 1.5963 |
| No log | 3.0 | 78 | 1.6026 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
MaryaAI/opus-mt-ar-en-finetuned-ar-to-en
|
MaryaAI
| 2021-09-07T07:26:24Z
| 251
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z
|
---
tags:
- generated_from_trainer
datasets:
- opus_wikipedia
model-index:
- name: opus-mt-ar-en-finetuned-ar-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_wikipedia
type: opus_wikipedia
args: ar-en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_wikipedia dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
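In the absence of documented usage, a minimal translation sketch that mirrors how the base Helsinki-NLP/opus-mt-ar-en model is typically used (the Arabic sample sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "MaryaAI/opus-mt-ar-en-finetuned-ar-to-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of Arabic sentences to English
batch = tokenizer(["مرحبا بالعالم"], return_tensors="pt", padding=True)
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```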
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
sabhi/t5-base-qa-qg
|
sabhi
| 2021-09-07T07:24:12Z
| 7
| 1
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-generation",
"dataset:squadv1",
"arxiv:1910.10683",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
---
datasets:
- squadv1
tags:
- question-generation
---
## T5 for multi-task QA and QG
This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation.
For question generation, the answer span is highlighted within the text with special highlight tokens (`<hl>`) and prefixed with 'generate question: '. For QA, the input is processed like this: `question: question_text context: context_text </s>`
You can play with the model using the inference API. Here's how you can use it:
For QG
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For QA
`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`
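If you would rather not clone the repo, the same prompts can presumably be fed directly through `transformers` (a hedged sketch; the repo's pipeline may use different decoding settings than the beam-search values assumed here):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sabhi/t5-base-qa-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("sabhi/t5-base-qa-qg")

# QA-style prompt using the format described above
prompt = "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```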
For more details, see [this](https://github.com/sabhi27/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/sabhi27/question_generation).
[](https://colab.research.google.com/github/sabhi27/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg", model="sabhi/t5-base-qa-qg")
# to generate questions simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
# for qa pass a dict with "question" and "context"
nlp({
"question": "What is 42 ?",
"context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
```
|
huggingtweets/prageru
|
huggingtweets
| 2021-09-07T00:19:57Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/prageru/1630973993106/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1412803355164872704/XvTlBoPh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PragerU</div>
<div style="text-align: center; font-size: 14px;">@prageru</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from PragerU.
| Data | PragerU |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 336 |
| Short tweets | 492 |
| Tweets kept | 2420 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3er68epp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prageru's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lv63sll) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lv63sll/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/prageru')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/discountpicasso-dril-liam_100000
|
huggingtweets
| 2021-09-07T00:14:05Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/discountpicasso-dril-liam_100000/1630973640579/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1426930394297819137/-zzMnfJo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/980964012170121217/U6FjPH4H_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LIAM & wint & Picasso</div>
<div style="text-align: center; font-size: 14px;">@discountpicasso-dril-liam_100000</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LIAM & wint & Picasso.
| Data | LIAM | wint | Picasso |
| --- | --- | --- | --- |
| Tweets downloaded | 1962 | 3226 | 3216 |
| Retweets | 135 | 472 | 427 |
| Short tweets | 435 | 313 | 421 |
| Tweets kept | 1392 | 2441 | 2368 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w4ekve8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @discountpicasso-dril-liam_100000's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s4a755y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/discountpicasso-dril-liam_100000')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ak92501-cafe_orbitinnit-ihatesinglets
|
huggingtweets
| 2021-09-07T00:03:08Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/ak92501-cafe_orbitinnit-ihatesinglets/1630972983357/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1429115399975497731/JZdA725e_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433245625429204993/xzzFE2CJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1405992051427229698/V3W-1gOb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">✨たち Tommy’s an Orbit 🌙 たち✨ & everyone in the system this isn’t normal & AK</div>
<div style="text-align: center; font-size: 14px;">@ak92501-cafe_orbitinnit-ihatesinglets</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ✨たち Tommy’s an Orbit 🌙 たち✨ & everyone in the system this isn’t normal & AK.
| Data | ✨たち Tommy’s an Orbit 🌙 たち✨ | everyone in the system this isn’t normal | AK |
| --- | --- | --- | --- |
| Tweets downloaded | 2256 | 1151 | 3250 |
| Retweets | 1350 | 78 | 403 |
| Short tweets | 323 | 352 | 464 |
| Tweets kept | 583 | 721 | 2383 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mhwl02od/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ak92501-cafe_orbitinnit-ihatesinglets's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m05466la) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m05466la/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ak92501-cafe_orbitinnit-ihatesinglets')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/liam_100000
|
huggingtweets
| 2021-09-06T23:32:16Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/liam_100000/1630971132171/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1426930394297819137/-zzMnfJo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LIAM</div>
<div style="text-align: center; font-size: 14px;">@liam_100000</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LIAM.
| Data | LIAM |
| --- | --- |
| Tweets downloaded | 1960 |
| Retweets | 135 |
| Short tweets | 434 |
| Tweets kept | 1391 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sila7bw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @liam_100000's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bu2qvu3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bu2qvu3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/liam_100000')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/formernumber
|
huggingtweets
| 2021-09-06T21:05:59Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: https://www.huggingtweets.com/formernumber/1630962355855/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1430593525108903940/vrSks7ph_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NaN</div>
<div style="text-align: center; font-size: 14px;">@formernumber</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NaN.
| Data | NaN |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 146 |
| Short tweets | 554 |
| Tweets kept | 2550 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cmch3y4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @formernumber's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iurxhit) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iurxhit/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/formernumber')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
julien-c/dummy-for-flat
|
julien-c
| 2021-09-06T21:02:55Z
| 0
| 1
| null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z
|
in the editor i only change this line
Example of a hf.co repo containing signed commits.
hello tabs
|
yseop/FNP_T5_D2T_complete
|
yseop
| 2021-09-06T20:54:21Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z
|
# T5-base data to text model specialized for Finance NLG
__complete version__
----
## Usage (HuggingFace Transformers)
#### Call the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yseop/FNP_T5_D2T_complete")
model = AutoModelForSeq2SeqLM.from_pretrained("yseop/FNP_T5_D2T_complete")
text = ["Group profit | valIs | € 115.7 million && € 115.7 million | dTime | in 2019"]
```
#### Choose a generation method
```python
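# Option 1: stochastic decoding with nucleus (top-p) and top-k sampling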
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
p = 0.82
k = 90
outputs = model.generate(input_ids,
do_sample=True,
top_p=p,
top_k=k,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
```python
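# Option 2: beam search decoding with a repetition penalty (top-k/top-p values carried over from the card)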
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
outputs = model.generate(input_ids,
max_length=200,
num_beams=2, repetition_penalty=2.5,
top_k=50, top_p=0.98,
length_penalty=1.0,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
**Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation.
|