modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---
abdelkader/distilbert-base-uncased-distilled-clinc | 18f0cfdeeccafc9b52cde6fa87f14189adf82b79 | 2022-01-20T05:15:31.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | abdelkader | null | abdelkader/distilbert-base-uncased-distilled-clinc | 8 | null | transformers | 13,000 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9464516129032258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3038
- Accuracy: 0.9465
## Model description
More information needed
## Intended uses & limitations
More information needed
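In the absence of author-provided details, here is a minimal, hedged usage sketch (the example utterance is invented, and the returned label names depend on the checkpoint's config):
```python
from transformers import pipeline

# Illustrative sketch: intent classification with the standard pipeline API.
classifier = pipeline(
    "text-classification",
    model="abdelkader/distilbert-base-uncased-distilled-clinc",
)
print(classifier("How do I reset the PIN on my credit card?"))
# Expected form: [{'label': <a clinc_oos intent label>, 'score': ...}]
```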
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.8460 | 0.7506 |
| 3.322 | 2.0 | 636 | 1.4301 | 0.8532 |
| 3.322 | 3.0 | 954 | 0.7377 | 0.9152 |
| 1.2296 | 4.0 | 1272 | 0.4784 | 0.9316 |
| 0.449 | 5.0 | 1590 | 0.3730 | 0.9390 |
| 0.449 | 6.0 | 1908 | 0.3367 | 0.9429 |
| 0.2424 | 7.0 | 2226 | 0.3163 | 0.9468 |
| 0.1741 | 8.0 | 2544 | 0.3074 | 0.9452 |
| 0.1741 | 9.0 | 2862 | 0.3054 | 0.9458 |
| 0.1501 | 10.0 | 3180 | 0.3038 | 0.9465 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
activebus/BERT-PT_rest | f263af781ec7802846a6268fbb704ab92c46aa36 | 2021-05-18T23:04:31.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | activebus | null | activebus/BERT-PT_rest | 8 | null | transformers | 13,001 | # ReviewBERT
BERT (post-)trained on review corpora to understand sentiment, opinions and various e-commerce aspects.
`BERT-DK_rest` is trained on 1G (19 types) of restaurant reviews from Yelp.
`BERT-PT_*` additionally uses SQuAD 1.1.
## Model Description
The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus.
Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/).
## Instructions
Loading the post-trained weights is as simple as, e.g.:
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-PT_rest")
model = AutoModel.from_pretrained("activebus/BERT-PT_rest")
```
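As an illustrative (unofficial) addition, the post-trained MLM head can also be queried through the standard `fill-mask` pipeline; the example sentence below is invented:
```python
from transformers import pipeline

# Illustrative sketch: ask the post-trained model to fill a masked aspect term.
fill_mask = pipeline("fill-mask", model="activebus/BERT-PT_rest")
for prediction in fill_mask("The service was quick but the [MASK] was cold."):
    print(prediction["token_str"], prediction["score"])
```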
## Evaluation Results
Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf)
## Citation
If you find this work useful, please cite as follows.
```
@inproceedings{xu_bert2019,
title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.",
booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
month = "jun",
year = "2019",
}
```
|
adamlin/csp | e329033e2ff223d13bbd05c1ea6802af992341a5 | 2022-06-16T16:36:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | adamlin | null | adamlin/csp | 8 | null | transformers | 13,002 | Entry not found |
adamlin/ml999_explosion_proof_electrical_equipment | 48f252bf825367cebe20c63c25a8335b43df22fe | 2021-12-20T16:56:15.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_explosion_proof_electrical_equipment | 8 | null | transformers | 13,003 | Entry not found |
adamlin/zero-shot-domain_cls | dfaf5c4290f229bea39c50927328ce598573a9d4 | 2021-07-25T14:36:59.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/zero-shot-domain_cls | 8 | null | transformers | 13,004 | Entry not found |
adelevie/distilbert-gsa-eula-opp | f02a0fc051936bda4da7fa147da8b95cebc82c32 | 2020-08-20T13:31:35.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | adelevie | null | adelevie/distilbert-gsa-eula-opp | 8 | null | transformers | 13,005 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-bert-hinglish-small | 67f42dc8905849fb97f1bd5659779068cd1fbec3 | 2021-11-26T16:53:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-bert-hinglish-small | 8 | null | transformers | 13,006 | Entry not found |
aditeyabaral/finetuned-iitp_pdt_review-xlm-roberta-base | 840ad498d3d3a205ac6197e5c47a44c7e21fa38c | 2021-11-26T06:25:06.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | aditeyabaral | null | aditeyabaral/finetuned-iitp_pdt_review-xlm-roberta-base | 8 | null | transformers | 13,007 | Entry not found |
aditeyabaral/sentencetransformer-distilbert-hinglish-big | a410e08cb6cdb689aa5e56cf3793ac7e6fc11269 | 2021-10-20T01:24:00.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | aditeyabaral | null | aditeyabaral/sentencetransformer-distilbert-hinglish-big | 8 | null | sentence-transformers | 13,008 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-distilbert-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
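As an illustrative follow-up (not part of the original card), the two embeddings produced above can be compared with cosine similarity, assuming a recent `sentence-transformers` release that provides `util.cos_sim`:
```python
from sentence_transformers import util

# Illustrative sketch: score the two example sentences against each other.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {similarity.item():.4f}")
```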
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ainize/gpt-j-6B-float16 | 9b2002d94044ae3ead003557cd916edcda516726 | 2022-01-25T05:21:23.000Z | [
"pytorch",
"gptj",
"feature-extraction",
"transformers",
"license:apache-2.0"
]
| feature-extraction | false | ainize | null | ainize/gpt-j-6B-float16 | 8 | null | transformers | 13,009 | ---
license: apache-2.0
---
Original repository : <https://huggingface.co/EleutherAI/gpt-j-6B>
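A minimal loading sketch (an assumption based on the card's pointer to GPT-J-6B and the `float16` naming, not an official snippet from this repository):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch: load the half-precision weights directly in float16.
tokenizer = AutoTokenizer.from_pretrained("ainize/gpt-j-6B-float16")
model = AutoModelForCausalLM.from_pretrained(
    "ainize/gpt-j-6B-float16", torch_dtype=torch.float16
)
# For generation, move the model to a GPU first (float16 inference on CPU is
# typically unsupported), e.g. model = model.to("cuda").
```
|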
airKlizz/distilbart-multi-combine-wiki-news | 01cffdf172da4004f0af6bdd92824c13430fd9f1 | 2020-07-03T09:57:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/distilbart-multi-combine-wiki-news | 8 | null | transformers | 13,010 | Entry not found |
airKlizz/gbert-base-germeval21-toxic-with-data-augmentation | b466d466211164e3201168607d1f9d9d864f94b6 | 2021-07-13T07:26:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | airKlizz | null | airKlizz/gbert-base-germeval21-toxic-with-data-augmentation | 8 | null | transformers | 13,011 | Entry not found |
alexyalunin/RuBioRoBERTa | 4c20b9e977453e476c119991aa3e53b466ce1c4e | 2022-01-24T16:55:15.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | alexyalunin | null | alexyalunin/RuBioRoBERTa | 8 | null | transformers | 13,012 | Entry not found |
alireza7/ARMAN-SS-100-persian-base-wiki-summary | feae32629c2510f46a4fd99fec73cadd6b296482 | 2021-09-29T19:22:29.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/ARMAN-SS-100-persian-base-wiki-summary | 8 | 1 | transformers | 13,013 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/PEGASUS-persian-base-tebyan | 247a8bf8f92fce6f8cbc64efe3f0fcc27527b63f | 2021-09-29T19:25:59.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-tebyan | 8 | null | transformers | 13,014 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
alireza7/TRANSFORMER-persian-base-perkey-summary | ab3a67a8674d1744f39802a5d72c07e74874e2f5 | 2021-09-29T19:26:38.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/TRANSFORMER-persian-base-perkey-summary | 8 | null | transformers | 13,015 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
allenai/dsp_roberta_base_tapt_amazon_helpfulness_115K | 92780ffc7733ef0cea679fdc98f8817c33f51ae2 | 2021-05-20T13:22:04.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_amazon_helpfulness_115K | 8 | null | transformers | 13,016 | Entry not found |
allenai/dsp_roberta_base_tapt_sciie_3219 | 0e3b9a1a877cfaab1864bf70ecfcafc633772b54 | 2021-05-20T13:33:48.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_sciie_3219 | 8 | null | transformers | 13,017 | Entry not found |
aloxatel/7EG | 325651262c697ddb3a517ea93549026b6840f651 | 2021-05-20T13:44:27.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | aloxatel | null | aloxatel/7EG | 8 | null | transformers | 13,018 | Entry not found |
anas-awadalla/bert-medium-pretrained-on-squad | d38dd388b2bb99339ac0c73cc755bb133a839158 | 2022-01-27T03:59:02.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | anas-awadalla | null | anas-awadalla/bert-medium-pretrained-on-squad | 8 | null | transformers | 13,019 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert_medium_pretrain_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_medium_pretrain_squad
This model is a fine-tuned version of [prajjwal1/bert-medium](https://huggingface.co/prajjwal1/bert-medium) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat | 3458e96f2859bc52510c9651e79dba6a0fa72574 | 2021-09-22T20:36:06.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"dataset:mit_movie",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible"
]
| question-answering | false | andi611 | null | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat | 8 | null | transformers | 13,020 | ---
language:
- en
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
- mit_movie
model_index:
- name: bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: squad_v2
type: squad_v2
- task:
name: Token Classification
type: token-classification
dataset:
name: mit_movie
type: mit_movie
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_movie datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
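As a hedged illustration (the question and context below are invented, not drawn from the training data), the checkpoint can be queried with the standard `question-answering` pipeline:
```python
from transformers import pipeline

# Illustrative sketch: extractive QA over movie-domain text.
qa = pipeline(
    "question-answering",
    model="andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-movie-with-neg-with-repeat",
)
result = qa(
    question="Who directed the movie?",
    context="The movie was directed by Christopher Nolan and released in 2010.",
)
print(result["answer"], result["score"])
```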
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
anon-submission-mk/electra-base-macedonian-cased-generator | 0530f4e2600c99a43a70fc079ced58590c60d87a | 2020-09-24T12:01:12.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | anon-submission-mk | null | anon-submission-mk/electra-base-macedonian-cased-generator | 8 | null | transformers | 13,021 | Entry not found |
anton-l/wav2vec2-base-ft-common-language | d8257576534e92b7cac0b476f4e5e39e9867c61b | 2021-10-28T09:06:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
]
| audio-classification | false | anton-l | null | anton-l/wav2vec2-base-ft-common-language | 8 | null | transformers | 13,022 | Entry not found |
anurag0077/distilbert-base-uncased-finetuned-squad2 | fd443708982bd7b15a28d81e2583d7700668928f | 2021-11-05T17:19:24.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | anurag0077 | null | anurag0077/distilbert-base-uncased-finetuned-squad2 | 8 | null | transformers | 13,023 | Entry not found |
anuragshas/wav2vec2-large-xls-r-300m-bg | 689187444066b113f9c78ea3ca50a9ff78ee3d96 | 2022-03-23T18:26:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-bg | 8 | null | transformers | 13,024 | ---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Bulgarian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bg
metrics:
- name: Test WER
type: wer
value: 21.195
- name: Test CER
type: cer
value: 4.786
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 32.667
- name: Test CER
type: cer
value: 12.452
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 31.03
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Bulgarian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2473
- Wer: 0.3002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1589 | 3.48 | 400 | 3.0830 | 1.0 |
| 2.8921 | 6.96 | 800 | 2.6605 | 0.9982 |
| 1.3049 | 10.43 | 1200 | 0.5069 | 0.5707 |
| 1.1349 | 13.91 | 1600 | 0.4159 | 0.5041 |
| 1.0686 | 17.39 | 2000 | 0.3815 | 0.4746 |
| 0.999 | 20.87 | 2400 | 0.3541 | 0.4343 |
| 0.945 | 24.35 | 2800 | 0.3266 | 0.4132 |
| 0.9058 | 27.83 | 3200 | 0.2969 | 0.3771 |
| 0.8672 | 31.3 | 3600 | 0.2802 | 0.3553 |
| 0.8313 | 34.78 | 4000 | 0.2662 | 0.3380 |
| 0.8068 | 38.26 | 4400 | 0.2528 | 0.3181 |
| 0.7796 | 41.74 | 4800 | 0.2537 | 0.3073 |
| 0.7621 | 45.22 | 5200 | 0.2503 | 0.3036 |
| 0.7611 | 48.7 | 5600 | 0.2477 | 0.2991 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset mozilla-foundation/common_voice_8_0 --config bg --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-bg"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "bg", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "ΠΈ Π½Π°Π΄ΡΡΠΈΡΡ ΠΌΡ ΠΊΠ°ΡΠ° Π±Π»ΠΎΠΎΠ½ΠΊΡΡΠ΅ΠΌ Π²Π·Π΅ Π΄Π° ΡΠ΅ ΡΡΠ±ΠΈΡΠ°"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 30.07 | 21.195 |
|
anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm | b44b140fe74f71dbda9a08d62fafadc84adcc46a | 2022-03-24T11:57:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lv",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm | 8 | null | transformers | 13,025 | ---
language:
- lv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Latvian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: lv
metrics:
- name: Test WER
type: wer
value: 9.633
- name: Test CER
type: cer
value: 2.614
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: lv
metrics:
- name: Test WER
type: wer
value: 36.11
- name: Test CER
type: cer
value: 14.244
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: lv
metrics:
- name: Test WER
type: wer
value: 44.12
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Latvian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - LV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1660
- Wer: 0.1705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.489 | 2.56 | 400 | 3.3590 | 1.0 |
| 2.9903 | 5.13 | 800 | 2.9704 | 1.0001 |
| 1.6712 | 7.69 | 1200 | 0.6179 | 0.6566 |
| 1.2635 | 10.26 | 1600 | 0.3176 | 0.4531 |
| 1.0819 | 12.82 | 2000 | 0.2517 | 0.3508 |
| 1.0136 | 15.38 | 2400 | 0.2257 | 0.3124 |
| 0.9625 | 17.95 | 2800 | 0.1975 | 0.2311 |
| 0.901 | 20.51 | 3200 | 0.1986 | 0.2097 |
| 0.8842 | 23.08 | 3600 | 0.1904 | 0.2039 |
| 0.8542 | 25.64 | 4000 | 0.1847 | 0.1981 |
| 0.8244 | 28.21 | 4400 | 0.1805 | 0.1847 |
| 0.7689 | 30.77 | 4800 | 0.1736 | 0.1832 |
| 0.7825 | 33.33 | 5200 | 0.1698 | 0.1821 |
| 0.7817 | 35.9 | 5600 | 0.1758 | 0.1803 |
| 0.7488 | 38.46 | 6000 | 0.1663 | 0.1760 |
| 0.7171 | 41.03 | 6400 | 0.1636 | 0.1721 |
| 0.7222 | 43.59 | 6800 | 0.1663 | 0.1729 |
| 0.7156 | 46.15 | 7200 | 0.1633 | 0.1715 |
| 0.7121 | 48.72 | 7600 | 0.1666 | 0.1718 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config lv --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config lv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "lv", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "domΔju ka viΕam viss labi"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 16.997 | 9.633 |
|
aristotletan/roberta-base-finetuned-sst2 | 871b020ac1d321e62ee9d8bf3576e980a1ee8240 | 2021-08-02T09:50:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:scim",
"transformers",
"generated_from_trainer",
"license:mit"
]
| text-classification | false | aristotletan | null | aristotletan/roberta-base-finetuned-sst2 | 8 | null | transformers | 13,026 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- scim
metrics:
- accuracy
model_index:
- name: roberta-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: scim
type: scim
args: eod
metric:
name: Accuracy
type: accuracy
value: 0.9111111111111111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sst2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the scim dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
- Accuracy: 0.9111
## Model description
More information needed
## Intended uses & limitations
More information needed
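A minimal inference sketch (the input sentence is invented, and the meaning of each output class depends on the `scim`/`eod` label set, which is not documented here):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative sketch: score one sentence and print the class probabilities.
model_id = "aristotletan/roberta-base-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Payment is due within thirty days of invoice.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```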
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 2.0273 | 0.6667 |
| No log | 2.0 | 180 | 0.8802 | 0.8556 |
| No log | 3.0 | 270 | 0.5908 | 0.8889 |
| No log | 4.0 | 360 | 0.4632 | 0.9111 |
| No log | 5.0 | 450 | 0.4294 | 0.9111 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
artemis13fowl/bert-finetuned-ner-accelerate | 58e4487df0d63ef880223078ec4e451f163f2392 | 2022-01-23T06:51:30.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | artemis13fowl | null | artemis13fowl/bert-finetuned-ner-accelerate | 8 | null | transformers | 13,027 | Entry not found |
asapp/sew-d-base-plus-400k-ft-ls100h | 526e765d949b6bddc9a33bc26e49232d826b2f6f | 2022-05-24T13:09:29.000Z | [
"pytorch",
"sew-d",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"audio",
"speech",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | asapp | null | asapp/sew-d-base-plus-400k-ft-ls100h | 8 | 3 | transformers | 13,028 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-d-base-plus-400k-ft-ls100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.45
---
# SEW-D-base+
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")
# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **asapp/sew-d-base-plus-400k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = SEWDForCTC.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")
def map_to_pred(batch):
input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
| --- | --- |
| 4.34 | 9.45 |
|
asapp/sew-d-small-100k | 6403bf92a300c103d24ced76a8e33abb644b43a0 | 2021-10-28T14:05:24.000Z | [
"pytorch",
"sew-d",
"feature-extraction",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"transformers",
"speech",
"license:apache-2.0"
]
| feature-extraction | false | asapp | null | asapp/sew-d-small-100k | 8 | null | transformers | 13,029 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-small
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
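For plain feature extraction, a hedged sketch follows (it assumes the checkpoint ships a preprocessor config so that `AutoFeatureExtractor` resolves to a 16 kHz `Wav2Vec2FeatureExtractor`; the dummy dataset is only used to obtain a speech sample):
```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, SEWDModel

# Illustrative sketch: extract hidden states from a 16 kHz speech sample.
feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-small-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-small-100k")

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```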
|
aseifert/gelectra-large-comma | 820c18c4df8007f78e9971eb801b953120a9c095 | 2020-10-29T08:35:48.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | aseifert | null | aseifert/gelectra-large-comma | 8 | 1 | transformers | 13,030 | Entry not found |
astarostap/autonlp-antisemitism-2-21194454 | 0af58113bab2812e0ce9c1a57bff994ae4305556 | 2021-10-18T18:06:19.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:astarostap/autonlp-data-antisemitism-2",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | astarostap | null | astarostap/autonlp-antisemitism-2-21194454 | 8 | null | transformers | 13,031 | ---
tags: autonlp
language: en
widget:
- text: "the jews have a lot of power"
datasets:
- astarostap/autonlp-data-antisemitism-2
co2_eq_emissions: 2.0686690092905224
---
# Description
This model takes a tweet with the word "jew" in it, and determines if it's antisemitic.
Training data:
This model was trained on 4k tweets, where ~50% were labeled as antisemitic.
I labeled them myself based on personal experience and knowledge about common antisemitic tropes.
Note:
This model is not meant to be the final say on what is or is not antisemitic, but rather a first pass on what might be antisemitic and should be reviewed by human experts.
Please keep in mind that I'm not an expert on antisemitism or hate speech.
As with any hate speech, whether something is antisemitic depends on context, and everyone has a different definition of what counts as hate speech.
If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected]
This model is not ready for production; it needs more evaluation and more training data.
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 21194454
- CO2 Emissions (in grams): 2.0686690092905224
- Dataset: https://huggingface.co/datasets/astarostap/autonlp-data-antisemitism-2
## Validation Metrics
- Loss: 0.5291365385055542
- Accuracy: 0.7572692793931732
- Precision: 0.7126948775055679
- Recall: 0.835509138381201
- AUC: 0.8185826549941126
- F1: 0.7692307692307693
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/astarostap/autonlp-antisemitism-2-21194454
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("astarostap/autonlp-antisemitism-2-21194454", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
auday/paraphraser_model1 | f95ebaa250541b59dcee594caf9e33f57554b3e7 | 2021-06-23T11:29:03.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | auday | null | auday/paraphraser_model1 | 8 | null | transformers | 13,032 | This folder contain a Google T5 Transformer Fine-tuned to generate paraphrases using:
- Para_NMT_50M_Paraphrasing_train_small.csv 134337 lines of pair sentences 19Mbytes
- Para_NMT_50M_Paraphrasing_val_small.csv 14928 lines of pair sentences 2.0Mbytes
Training Start Time: Sun Mar 14 18:27:15 2021
Training End Time: Sun Mar 14 22:19:00 2021
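No usage snippet was provided; below is a hedged sketch of how a T5 paraphraser is typically invoked (the `"paraphrase: "` prefix and the example sentence are assumptions, since the training prompt format is not documented here):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Illustrative sketch: generate paraphrase candidates with beam search.
tokenizer = T5Tokenizer.from_pretrained("auday/paraphraser_model1")
model = T5ForConditionalGeneration.from_pretrained("auday/paraphraser_model1")

text = "paraphrase: The meeting was postponed because of the storm."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs, max_length=64, num_beams=5, num_return_sequences=3, early_stopping=True
)
for candidate in outputs:
    print(tokenizer.decode(candidate, skip_special_tokens=True))
```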
|
ayameRushia/wav2vec2-large-xls-r-300m-id | eab14d1deae6e02517ac5c93e7d0ce522f4e72e2 | 2022-01-31T06:24:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | ayameRushia | null | ayameRushia/wav2vec2-large-xls-r-300m-id | 8 | null | transformers | 13,033 | ---
language:
- id
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: 'XLS-R-300M - Indonesia'
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 38.098
- name: Test CER
type: cer
value: 14.261
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Indonesia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ID dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3975
- Wer: 0.2633
## Model description
More information needed
## Intended uses & limitations
More information needed
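A hedged transcription sketch (`example_id.wav` is a placeholder path; any mono Indonesian recording resampled to 16 kHz should work):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Illustrative sketch: greedy CTC decoding of a 16 kHz Indonesian recording.
model_id = "ayameRushia/wav2vec2-large-xls-r-300m-id"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = torchaudio.load("example_id.wav")  # placeholder file
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```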
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.78 | 100 | 4.5645 | 1.0 |
| No log | 1.55 | 200 | 2.9016 | 1.0 |
| No log | 2.33 | 300 | 2.2666 | 1.0982 |
| No log | 3.1 | 400 | 0.6079 | 0.6376 |
| 3.2188 | 3.88 | 500 | 0.4985 | 0.5008 |
| 3.2188 | 4.65 | 600 | 0.4477 | 0.4469 |
| 3.2188 | 5.43 | 700 | 0.3953 | 0.3915 |
| 3.2188 | 6.2 | 800 | 0.4319 | 0.3921 |
| 3.2188 | 6.98 | 900 | 0.4171 | 0.3698 |
| 0.2193 | 7.75 | 1000 | 0.3957 | 0.3600 |
| 0.2193 | 8.53 | 1100 | 0.3730 | 0.3493 |
| 0.2193 | 9.3 | 1200 | 0.3780 | 0.3348 |
| 0.2193 | 10.08 | 1300 | 0.4133 | 0.3568 |
| 0.2193 | 10.85 | 1400 | 0.3984 | 0.3193 |
| 0.1129 | 11.63 | 1500 | 0.3845 | 0.3174 |
| 0.1129 | 12.4 | 1600 | 0.3882 | 0.3162 |
| 0.1129 | 13.18 | 1700 | 0.3982 | 0.3008 |
| 0.1129 | 13.95 | 1800 | 0.3902 | 0.3198 |
| 0.1129 | 14.73 | 1900 | 0.4082 | 0.3237 |
| 0.0765 | 15.5 | 2000 | 0.3732 | 0.3126 |
| 0.0765 | 16.28 | 2100 | 0.3893 | 0.3001 |
| 0.0765 | 17.05 | 2200 | 0.4168 | 0.3083 |
| 0.0765 | 17.83 | 2300 | 0.4193 | 0.3044 |
| 0.0765 | 18.6 | 2400 | 0.4006 | 0.3013 |
| 0.0588 | 19.38 | 2500 | 0.3836 | 0.2892 |
| 0.0588 | 20.16 | 2600 | 0.3761 | 0.2903 |
| 0.0588 | 20.93 | 2700 | 0.3895 | 0.2930 |
| 0.0588 | 21.71 | 2800 | 0.3885 | 0.2791 |
| 0.0588 | 22.48 | 2900 | 0.3902 | 0.2891 |
| 0.0448 | 23.26 | 3000 | 0.4200 | 0.2849 |
| 0.0448 | 24.03 | 3100 | 0.4013 | 0.2799 |
| 0.0448 | 24.81 | 3200 | 0.4039 | 0.2731 |
| 0.0448 | 25.58 | 3300 | 0.3970 | 0.2647 |
| 0.0448 | 26.36 | 3400 | 0.4081 | 0.2690 |
| 0.0351 | 27.13 | 3500 | 0.4090 | 0.2674 |
| 0.0351 | 27.91 | 3600 | 0.3953 | 0.2663 |
| 0.0351 | 28.68 | 3700 | 0.4044 | 0.2650 |
| 0.0351 | 29.46 | 3800 | 0.3969 | 0.2646 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
baykenney/bert-base-gpt2detector-random | d6d2454a2f2459ea1a881beb95b157b821d8071e | 2021-05-19T12:09:16.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | baykenney | null | baykenney/bert-base-gpt2detector-random | 8 | null | transformers | 13,034 | Entry not found |
beatrice-portelli/DiLBERT | 6bf28b878e57fef9817149cf19b5b65f43a4c28b | 2021-11-30T16:00:18.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"en",
"transformers",
"medical",
"disease",
"classification",
"autotrain_compatible"
]
| fill-mask | false | beatrice-portelli | null | beatrice-portelli/DiLBERT | 8 | null | transformers | 13,035 | ---
language:
- en
tags:
- medical
- disease
- classification
---
# DiLBERT (Disease Language BERT)
The objective of this model was to obtain a specialized disease-related language model, trained **from scratch**. <br>
We created a pre-training corpus starting from **ICD-11** entities and enriched it with documents from **PubMed** and **Wikipedia** related to the same entities. <br>
Results of finetuning show that DiLBERT leads to comparable or higher accuracy scores on various classification tasks compared with other general-purpose or in-domain models (e.g., BioClinicalBERT, RoBERTa, XLNet).
Model released with the paper "**DiLBERT: Cheap Embeddings for Disease Related Medical NLP**". <br>
To summarize the practical implications of our work: we pre-trained and fine-tuned a domain-specific BERT model on a small corpus, with comparable or better performance than state-of-the-art models.
This approach may also simplify the development of models for languages other than English, due to the smaller quantity of data needed for training.
### Composition of the pretraining corpus
| Source | Documents | Words |
|---|---:|---:|
| ICD-11 descriptions | 34,676 | 1.0 million |
| PubMed Title and Abstracts | 852,550 | 184.6 million |
| Wikipedia pages | 37,074 | 6.1 million |
### Main repository
For more details check the main repo https://github.com/KevinRoitero/dilbert
# Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("beatrice-portelli/DiLBERT")
model = AutoModelForMaskedLM.from_pretrained("beatrice-portelli/DiLBERT")
```
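For example (an illustrative addition, not from the original card), the objects loaded above can be wrapped in a `fill-mask` pipeline; the masked sentence is invented:
```python
from transformers import pipeline

# Illustrative sketch: query the MLM head with a disease-related sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("The patient was diagnosed with [MASK] disease."):
    print(prediction["token_str"], prediction["score"])
```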
# How to cite
```
@article{roitero2021dilbert,
title={{DilBERT}: Cheap Embeddings for Disease Related Medical NLP},
author={Roitero, Kevin and Portelli, Beatrice and Popescu, Mihai Horia and Della Mea, Vincenzo},
journal={IEEE Access},
volume={},
pages={},
year={2021},
publisher={IEEE},
note = {In Press}
}
```
|
beomi/beep-kcbert-base-bias | ecb21a8b31cd776376f7bca11fdf08f450c43a66 | 2021-10-23T06:22:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | beomi | null | beomi/beep-kcbert-base-bias | 8 | null | transformers | 13,036 | Entry not found |
beomi/beep-klue-roberta-base-bias | fd8d9f5b21445124bd3aa12b570f76347657597c | 2021-10-23T06:13:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | beomi | null | beomi/beep-klue-roberta-base-bias | 8 | null | transformers | 13,037 | Entry not found |
beomi/beep-koelectra-base-v3-discriminator-bias | 2f9b0cec2de0e1d087996e0f455ec40200a1f8ff | 2021-10-23T06:14:46.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | beomi | null | beomi/beep-koelectra-base-v3-discriminator-bias | 8 | null | transformers | 13,038 | Entry not found |
beomi/kcbert-large-dev | ce1a3f63ac590f0625fd4f22d4194b380e277dab | 2021-05-19T12:31:44.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | beomi | null | beomi/kcbert-large-dev | 8 | null | transformers | 13,039 | Entry not found |
bhavikardeshna/multilingual-bert-base-cased-english | a21f3ddf498acb30f5f86724d2ef0b2b9dd6af35 | 2021-12-21T11:42:34.000Z | [
"pytorch",
"bert",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bhavikardeshna | null | bhavikardeshna/multilingual-bert-base-cased-english | 8 | null | transformers | 13,040 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
boychaboy/MNLI_bert-base-uncased | 654a63b78e264fdd5dd09bb6b0c7b11c69123186 | 2021-05-19T13:15:43.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/MNLI_bert-base-uncased | 8 | null | transformers | 13,041 | Entry not found |
boychaboy/MNLI_distilroberta-base | c60bad81ea94a889060d2a5b840a57eafabd5934 | 2021-05-20T14:30:07.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/MNLI_distilroberta-base | 8 | null | transformers | 13,042 | Entry not found |
boychaboy/MNLI_roberta-large | da55162754f8570ce3eab5bdbdd6f8b8019b3d12 | 2021-05-20T14:33:21.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/MNLI_roberta-large | 8 | null | transformers | 13,043 | Entry not found |
boychaboy/kobias_v2_klue-roberta-base | 032c4f2ebc147fabd4b910d9f362979bec101c09 | 2021-07-11T15:56:52.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/kobias_v2_klue-roberta-base | 8 | null | transformers | 13,044 | Entry not found |
brcps12/bert-base-finetuned-sts | a6a33f75e5b77b52e55b9dda6c24cefe9a83bbdf | 2022-01-05T17:03:08.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | brcps12 | null | brcps12/bert-base-finetuned-sts | 8 | null | transformers | 13,045 | Entry not found |
byteb/DialoGPT-small-hades | 0a254ba3357c0d4ed85ccfc7c90391e51a104585 | 2021-06-06T11:50:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | byteb | null | byteb/DialoGPT-small-hades | 8 | null | transformers | 13,046 | Entry not found |
canwenxu/evil_gpt2 | c6895a78f8aea74c0422321ec37041f342ee3ee9 | 2021-05-21T14:44:54.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | canwenxu | null | canwenxu/evil_gpt2 | 8 | null | transformers | 13,047 | **It's for testing use. Don't use it in your project ;)** |
cardiffnlp/twitter-roberta-base-stance-atheism | f6fd6402c912431bffbc9b10014a4f6e0839a1db | 2021-05-20T15:08:50.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-stance-atheism | 8 | null | transformers | 13,048 | |
celtics1863/env-bert-large-chinese | 989a080c7ba03c00db15defbd615da144e408c6f | 2021-11-09T11:10:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | celtics1863 | null | celtics1863/env-bert-large-chinese | 8 | null | transformers | 13,049 | Entry not found |
cestwc/roberta-base-bigram-binary | fb83d34e21c994b06e3fb552fb96a01f22ce9987 | 2021-12-05T19:03:07.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | cestwc | null | cestwc/roberta-base-bigram-binary | 8 | null | transformers | 13,050 | Entry not found |
charsiu/en_w2v2_fs_10ms | 72d5cc831f141cd15b3c9921a8f12a3c62f422ad | 2021-10-02T22:35:03.000Z | [
"pytorch",
"wav2vec2",
"transformers"
]
| null | false | charsiu | null | charsiu/en_w2v2_fs_10ms | 8 | null | transformers | 13,051 | Entry not found |
chinhon/pegasus-large-commentaries_hd | b6c150fa79deee4f2b84a4a89d5fec67e293948f | 2022-01-15T14:43:29.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | chinhon | null | chinhon/pegasus-large-commentaries_hd | 8 | null | transformers | 13,052 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-large-commentaries_hd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-large-commentaries_hd
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5453
- Rouge1: 26.3475
- Rouge2: 9.5095
- Rougel: 22.6367
- Rougelsum: 22.8127
- Gen Len: 14.4789
## Model description
More information needed
## Intended uses & limitations
More information needed
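A hedged generation sketch (the input commentary is invented; judging from the model name and the short generation lengths reported below, the checkpoint appears to produce headline-style summaries, but that is an assumption):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Illustrative sketch: generate a short headline-style summary for a commentary.
model_id = "chinhon/pegasus-large-commentaries_hd"
tokenizer = PegasusTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

commentary = (
    "The latest round of trade talks ended without agreement, and both sides "
    "are now weighing tariffs that could slow growth across the region."
)
inputs = tokenizer(commentary, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```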
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.5718 | 1.0 | 4710 | 2.5277 | 25.1384 | 8.6528 | 21.3443 | 21.5289 | 15.3268 |
| 2.4034 | 2.0 | 9420 | 2.4973 | 25.9298 | 9.2238 | 22.3192 | 22.4817 | 14.2243 |
| 2.2093 | 3.0 | 14130 | 2.5013 | 26.6036 | 9.7482 | 22.8409 | 23.0077 | 14.2263 |
| 2.0518 | 4.0 | 18840 | 2.5272 | 26.4723 | 9.6599 | 22.7439 | 22.9201 | 14.38 |
| 1.9906 | 5.0 | 23550 | 2.5453 | 26.3475 | 9.5095 | 22.6367 | 22.8127 | 14.4789 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
chitra/finetuned-adversarial-paraphrase-model-test | 4492bfeb88822f3daf47dc675474158eeeaa1429 | 2022-01-19T07:45:23.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | chitra | null | chitra/finetuned-adversarial-paraphrase-model-test | 8 | null | transformers | 13,053 | Entry not found |
chitra/finetuned-adversarial-paraphrase-model | c51758c977f9862d6f5fb8d05737dfc2234855c9 | 2022-01-19T09:13:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | chitra | null | chitra/finetuned-adversarial-paraphrase-model | 8 | null | transformers | 13,054 | ---
tags:
- generated_from_trainer
model-index:
- name: finetuned-adversarial-paraphrase-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-adversarial-paraphrase-model
This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5680
## Model description
More information needed
## Intended uses & limitations
More information needed
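In the meantime, a minimal sketch for scoring a sentence pair (the label meanings follow the base checkpoint's config and are not restated here; the example pair is illustrative only):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "chitra/finetuned-adversarial-paraphrase-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Paraphrase detection is a sentence-pair task
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # see model.config.id2label for the label order
```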
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0848 | 1.0 | 2000 | 5.4633 |
| 0.0495 | 2.0 | 4000 | 6.0352 |
| 0.0121 | 3.0 | 6000 | 7.5680 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
chrommium/two-step-finetuning-sbert | b55007beea40eb438c13d50b2f43b3e485a2eb90 | 2021-11-23T21:29:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | chrommium | null | chrommium/two-step-finetuning-sbert | 8 | null | transformers | 13,055 | Entry not found |
clem/autonlp-test3-2101787 | 984c2e1fdba7bbf4db373d794003fea78735ee57 | 2021-06-29T04:32:06.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:clem/autonlp-data-test3",
"transformers",
"autonlp"
]
| text-classification | false | clem | null | clem/autonlp-test3-2101787 | 8 | null | transformers | 13,056 | ---
tags: autonlp
language: en
widget:
- text: "this can wait"
datasets:
- clem/autonlp-data-test3
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification (Urgent / Not Urgent)
## Validation Metrics
- Loss: 0.08956164121627808
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101787
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101787", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101787", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
datawhales/korean-relation-extraction | 0656b00f215a24761d3193ac12193c3792169b44 | 2021-12-03T11:32:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | datawhales | null | datawhales/korean-relation-extraction | 8 | null | transformers | 13,057 | Entry not found |
dbmdz/electra-base-turkish-mc4-cased-discriminator | a502165dafbf5c3c4afdbd63273865f3e823af9d | 2021-09-23T10:44:30.000Z | [
"pytorch",
"tf",
"tensorboard",
"electra",
"pretraining",
"tr",
"dataset:allenai/c4",
"transformers",
"license:mit"
]
| null | false | dbmdz | null | dbmdz/electra-base-turkish-mc4-cased-discriminator | 8 | null | transformers | 13,058 | ---
language: tr
license: mit
datasets:
- allenai/c4
---
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (cased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with π€/Transformers:
```python
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
```
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
denpa92/bert-base-cantonese | 53168b1e97864332c79e1c9496eb65f2db1c795f | 2021-05-19T15:37:31.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| null | false | denpa92 | null | denpa92/bert-base-cantonese | 8 | null | transformers | 13,059 | Entry not found |
diegorossi/distilbert-base-uncased-finetuned-sst2 | bd9e5cca72517fb5b4e5c91582325dad9b942d01 | 2021-09-17T19:51:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | diegorossi | null | diegorossi/distilbert-base-uncased-finetuned-sst2 | 8 | null | transformers | 13,060 | Entry not found |
diegozs97/finetuned-sciie-seed-4-100k | b11246f3c19875b6392495d843773d46bb0e7539 | 2021-12-10T01:52:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-100k | 8 | null | transformers | 13,061 | Entry not found |
diegozs97/finetuned-sciie-seed-4-20k | fec75f1848fec242a48c45974b6a942f237447dd | 2021-12-10T01:50:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-20k | 8 | null | transformers | 13,062 | Entry not found |
dpetrini/t5-small-finetuned-ro-to-en | 6fcf3865d8cdfb3dbbbe82dbae4fee6a9809b9ac | 2021-12-02T23:08:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | dpetrini | null | dpetrini/t5-small-finetuned-ro-to-en | 8 | null | transformers | 13,063 | Entry not found |
echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid | 445de53edd63ee339f124e2124f7dab028a55d2a | 2021-07-15T13:11:02.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qqp",
"transformers",
"license:apache-2.0"
]
| text-classification | false | echarlaix | null | echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid | 8 | null | transformers | 13,064 | ---
language: en
license: apache-2.0
tags:
- text-classification
datasets:
- qqp
metrics:
- F1
---
## bert-base-uncased model fine-tuned on QQP
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contains **36%** of the original weights.
The model contains **50%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
<div class="graph"><script src="/echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid/raw/main/model_card/density_info.js" id="70162e64-2a82-4147-ac7a-864cfe18a013"></script></div>
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on the QQP task, and distilled from the model [textattack/bert-base-uncased-QQP](https://huggingface.co/textattack/bert-base-uncased-QQP).
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of block pruning is that some of the attention heads are completely removed: 54 heads were removed on a total of 144 (37.5%).
<div class="graph"><script src="/echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid/raw/main/model_card/pruning_info.js" id="f4fb8229-3e66-406e-b99f-f771ce6117c8"></script></div>
## Details of the QQP dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| QQP | train | 364K |
| QQP | eval | 40K |
### Results
**Pytorch model file size**: `377MB` (original BERT: `420MB`)
| Metric | # Value |
| ------ | --------- |
| **F1** | **87.87** |
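### Usage
A minimal sketch for scoring a question pair (check `model.config.id2label` for the label order; the example pair is illustrative only):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "echarlaix/bert-base-uncased-qqp-f87.8-d36-hybrid"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "How can I be a good geologist?",
    "What should I do to be a great geologist?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```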
|
ehdwns1516/klue-roberta-base_sae | 622a7e92211f2200f986579dc94d647042c93be5 | 2021-08-18T11:31:20.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | ehdwns1516 | null | ehdwns1516/klue-roberta-base_sae | 8 | null | transformers | 13,065 | # klue-roberta-base-sae
* This model was trained on a Korean dataset.
* Input a sentence whose intent you want to identify.
* You can use English, but don't expect accuracy.
klue-roberta-base-kornli DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)
klue-roberta-base-kornli API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/KLUE-RoBERTa-base_sae)
## Overview
Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base)
Language: Korean
Training data: [kor_sae](https://huggingface.co/datasets/kor_sae)
Eval data: [kor_sae](https://huggingface.co/datasets/kor_sae)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/KLUE-RoBERTa-base_sae_notebook)
## Usage
## In Transformers
```
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base_sae")
classifier = pipeline(
"text-classification",
    model="ehdwns1516/klue-roberta-base_sae",
return_all_scores=True,
)
context = "sentence what you want to grasp intent"
result = dict()
result[0] = classifier(context)[0]
```
|
emfa/danish-bert-botxo-danish-finetuned-hatespeech | d23353cc044d1c3a658603decf5f40bf0b9163c7 | 2021-12-06T11:14:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index"
]
| text-classification | false | emfa | null | emfa/danish-bert-botxo-danish-finetuned-hatespeech | 8 | null | transformers | 13,066 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: danish-bert-botxo-danish-finetuned-hatespeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# danish-bert-botxo-danish-finetuned-hatespeech
This model is for a university project and is uploaded for sharing between students. It is trained on a Danish hate speech labeled training set. Feel free to use it, but as of now, we don't promise any good results ;-)
This model is a fine-tuned version of [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3584
## Model description
More information needed
## Intended uses & limitations
More information needed
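In the meantime, a minimal classification sketch (the label names are whatever the fine-tuned checkpoint's config defines; the example sentence is illustrative only):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="emfa/danish-bert-botxo-danish-finetuned-hatespeech",
)
print(classifier("Det her er en helt almindelig dansk sætning."))
```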
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 315 | 0.3285 |
| 0.2879 | 2.0 | 630 | 0.3288 |
| 0.2879 | 3.0 | 945 | 0.3178 |
| 0.1371 | 4.0 | 1260 | 0.3584 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
emre/arxiv27k-t5-abst-title-gen | 7f2e550ad50a1e19555ee40fcf1b1785ae4fe967 | 2022-01-22T15:18:01.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | emre | null | emre/arxiv27k-t5-abst-title-gen | 8 | null | transformers | 13,067 | ---
license: apache-2.0
tags:
- generated_from_trainer
- summarization
metrics:
- rouge
model-index:
- name: arxiv27k-t5-abst-title-gen/
results: []
---
# arxiv27k-t5-abst-title-gen/
This model is a fine-tuned version of mt5-small on the arxiv-abstract-title dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6002
- Rouge1: 32.8
- Rouge2: 21.9
- Rougel: 34.8
## Model description
The model was trained in about 4 hours using a Colab Pro notebook.
## Intended uses & limitations
Can be used for generating paper titles from given abstracts.
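A minimal generation sketch with plain `transformers` is shown below; whether a task prefix was used during fine-tuning is not documented, so none is added here (treat that as an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "emre/arxiv27k-t5-abst-title-gen"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

abstract = "Paste the paper abstract here."
inputs = tokenizer(abstract, truncation=True, max_length=256, return_tensors="pt")
title_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```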
### Training args
```python
model_args = T5Args()
model_args.max_seq_length = 256
model_args.train_batch_size = 8
model_args.eval_batch_size = 8
model_args.num_train_epochs = 6
model_args.evaluate_during_training = False
model_args.use_multiprocessing = False
model_args.fp16 = False
model_args.save_steps = 40000
model_args.save_eval_checkpoints = False
model_args.save_model_every_epoch = True
model_args.output_dir = OUTPUT_DIR
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.num_return_sequences = 1
```
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
### Contact
[email protected]
Davut Emre Taşar |
emrecan/bert-base-turkish-cased-multinli_tr | 1275f728c5d696d6a91685ccce401371aafa5e37 | 2021-12-01T10:45:51.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
]
| zero-shot-classification | false | emrecan | null | emrecan/bert-base-turkish-cased-multinli_tr | 8 | null | transformers | 13,068 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yΓΌkselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo Γ§ok saΓ§maydΔ±, beΔendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
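## Usage
A minimal zero-shot sketch mirroring the widget examples above. The Turkish `hypothesis_template` is an assumption rather than part of the released config (the pipeline's default template is English):
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/bert-base-turkish-cased-multinli_tr",
)
print(classifier(
    "Dolar yükselmeye devam ediyor.",
    candidate_labels=["ekonomi", "siyaset", "spor"],
    hypothesis_template="Bu örnek {} ile ilgilidir.",  # assumed Turkish template
))
```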
|
emrecan/convbert-base-turkish-mc4-cased-snli_tr | bb678f091a24f718efea6be62a5ae452f7bbe7be | 2021-12-01T19:43:30.000Z | [
"pytorch",
"convbert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
]
| zero-shot-classification | false | emrecan | null | emrecan/convbert-base-turkish-mc4-cased-snli_tr | 8 | null | transformers | 13,069 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yΓΌkselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo Γ§ok saΓ§maydΔ±, beΔendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
ensamblador/gpt2-derecha-with-bos-eos-8heads | ce06114849903453743feedfd189c08a6ce1e740 | 2021-05-21T15:50:53.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ensamblador | null | ensamblador/gpt2-derecha-with-bos-eos-8heads | 8 | null | transformers | 13,070 | Entry not found |
erwanlc/t5-coktails_recipe-small | 9030657f28c7d927fdb063c13b695b3707a90555 | 2022-01-14T14:32:10.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | erwanlc | null | erwanlc/t5-coktails_recipe-small | 8 | null | transformers | 13,071 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-coktails_recipe-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-coktails_recipe-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
facebook/wav2vec2-base-nl-voxpopuli | fd6217d13ae32bce0bc8454aeca26aa73653164b | 2021-07-06T01:55:08.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"nl",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
]
| automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-nl-voxpopuli | 8 | null | transformers | 13,072 | ---
language: nl
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the Dutch (nl) unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
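# Usage
Since this is a pretrained-only checkpoint (no CTC head), the sketch below only extracts hidden states; the feature extractor is instantiated with default 16 kHz settings as an assumption rather than loaded from the repository:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-nl-voxpopuli")
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)  # assumed defaults

speech = np.zeros(16000, dtype=np.float32)  # 1 second of placeholder 16 kHz audio
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, 768)
```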
|
flax-community/swe-roberta-wiki-oscar | 2a0742740f309be3400448a652aa155426fd0d52 | 2021-09-23T13:54:25.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"sv",
"transformers",
"swedish",
"license:cc-by-4.0",
"fill-mask"
]
| fill-mask | false | flax-community | null | flax-community/swe-roberta-wiki-oscar | 8 | null | transformers | 13,073 | ---
language: sv
license: cc-by-4.0
tags:
- swedish
- roberta
pipeline_tag: fill-mask
widget:
- text: Meningen med livet är <mask>.
---
# Swe Roberta Wiki Oscar
## Description
This RoBERTa model was trained on the Swedish Wikipedia and OSCAR datasets.
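## Usage
A minimal fill-mask sketch, mirroring the widget example above:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="flax-community/swe-roberta-wiki-oscar")
print(unmasker("Meningen med livet är <mask>."))
```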
## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
|
gchhablani/fnet-large-finetuned-cola-copy5 | da84f34242bd687f59e8fc518e916a02ae716ae6 | 2021-10-10T20:37:34.000Z | [
"pytorch",
"tensorboard",
"fnet",
"text-classification",
"transformers"
]
| text-classification | false | gchhablani | null | gchhablani/fnet-large-finetuned-cola-copy5 | 8 | null | transformers | 13,074 | Entry not found |
ghadeermobasher/BC5CDR-Imbalanced-PubMedBERT | 2487cc014a1c3c094b50f7294823bb4dcb064a5c | 2022-01-21T19:38:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Imbalanced-PubMedBERT | 8 | null | transformers | 13,075 | Entry not found |
ghadeermobasher/BCHEM4-Modified-BioBERT-v1 | 799ff8a4fb9e831c87f8d2d47aad1dbe0c1e55ee | 2022-02-04T07:43:57.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BCHEM4-Modified-BioBERT-v1 | 8 | null | transformers | 13,076 | Entry not found |
giacomomiolo/electramed_small | cb8dbaf0b7ce7b5438f1bf2756a57be18aaa212f | 2020-09-03T22:48:14.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"transformers"
]
| null | false | giacomomiolo | null | giacomomiolo/electramed_small | 8 | null | transformers | 13,077 | Entry not found |
glasses/resnet50 | 761c66bf309109ad629044fa4d731e6b9de5f290 | 2021-11-30T20:09:35.000Z | [
"pytorch",
"dataset:imagenet",
"arxiv:1512.03385",
"arxiv:1812.01187",
"transformers",
"image-classification",
"license:apache-2.0"
]
| image-classification | false | glasses | null | glasses/resnet50 | 8 | null | transformers | 13,078 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---
# resnet50
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by chaning `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
gogamza/kobert-legalqa-v1 | 5bd0216c45e2640265804ab86fa63e2bb22dd3d4 | 2021-07-27T09:16:59.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
]
| null | false | gogamza | null | gogamza/kobert-legalqa-v1 | 8 | 1 | transformers | 13,079 | Please refer to: https://github.com/haven-jeon/LegalQA#train |
gonced8/pegasus-conversational-qa | 0d097093bad5f1ceb243875c73e5c0927982c1b2 | 2022-02-14T11:17:45.000Z | [
"pytorch",
"tf",
"pegasus",
"text2text-generation",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | gonced8 | null | gonced8/pegasus-conversational-qa | 8 | null | transformers | 13,080 | ---
license: gpl-3.0
---
# rachael-scai
Generation model (Pegasus fine-tuned with QReCC) used by group Rachael in its participation in SCAI 2021.
The GitHub repository can be found at: [gonced8/rachael-scai](https://github.com/gonced8/rachael-scai)
Gonçalo Raposo
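## Usage
A minimal generation sketch. How the conversation history and any retrieved passages should be concatenated into the model input is defined in the project repository, so the single-string input below is only an assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "gonced8/pegasus-conversational-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumed input layout: previous turns plus the current question in one string
conversation = "Who wrote Pride and Prejudice? Jane Austen. When was it published?"
inputs = tokenizer(conversation, truncation=True, return_tensors="pt")
answer_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```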
## Cite
```bibtex
@InProceedings{Raposo2022,
  author    = {Gonçalo Raposo and Rui Ribeiro and Bruno Martins and Luísa Coheur},
booktitle = {44th European Conference on Information Retrieval},
title = {Question rewriting? Assessing its importance for conversational question answering},
year = {2022},
month = apr,
  note      = {This version of the contribution has been accepted for publication, after peer review but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/[not yet available]. Use of this Accepted Version is subject to the publisher's Accepted Manuscript terms of use \url{https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms}},
abstract = {In conversational question answering, systems must correctly interpret the interconnected interactions and generate knowledgeable answers, which may require the retrieval of relevant information from a background repository. Recent approaches to this problem leverage neural language models, although different alternatives can be considered in terms of modules for (a) representing user questions in context, (b) retrieving the relevant background information, and (c) generating the answer. This work presents a conversational question answering system designed specifically for the Search-Oriented Conversational AI (SCAI) shared task, and reports on a detailed analysis of its question rewriting module. In particular, we considered different variations of the question rewriting module to evaluate the influence on the subsequent components, and performed a careful analysis of the results obtained with the best system configuration. Our system achieved the best performance in the shared task and our analysis emphasizes the importance of the conversation context representation for the overall system performance.},
keywords = {conversational question answering, conversational search, question rewriting, transformer-based neural language models},
}
```
|
google/t5-efficient-large-nh24 | d62151a6d071eb3b4871f63b984417e8ec936a9d | 2022-02-15T10:57:31.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-large-nh24 | 8 | 1 | transformers | 13,081 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-LARGE-NH24 (Deep-Narrow version)
T5-Efficient-LARGE-NH24 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model's depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-large-nh24** - is of model type **Large** with the following variations:
- **nh** is **24**
It has **888.72** million parameters and thus requires *ca.* **3554.88 MB** of memory in full precision (*fp32*)
or **1777.44 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
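As a quick sanity check before fine-tuning, the checkpoint loads like any other T5 model (minimal sketch; the printed parameter count should roughly match the figure quoted above):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-large-nh24")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-large-nh24")
print(f"{model.num_parameters() / 1e6:.1f}M parameters")  # ~888.7M
```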
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-small-dm256 | e1a2da3e780881f5eab471826c36faae19823749 | 2022-02-15T10:56:43.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-dm256 | 8 | null | transformers | 13,082 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-DM256 (Deep-Narrow version)
T5-Efficient-SMALL-DM256 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model's depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-dm256** - is of model type **Small** with the following variations:
- **dm** is **256**
It has **30.27** million parameters and thus requires *ca.* **121.07 MB** of memory in full precision (*fp32*)
or **60.54 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-xl-nl8 | bfaab075a71e231028177e6effb8c3edb4fd94eb | 2022-02-15T10:51:59.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-xl-nl8 | 8 | null | transformers | 13,083 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-XL-NL8 (Deep-Narrow version)
T5-Efficient-XL-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model's depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-xl-nl8** - is of model type **Xl** with the following variations:
- **nl** is **8**
It has **972.49** million parameters and thus requires *ca.* **3889.95 MB** of memory in full precision (*fp32*)
or **1944.97 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-xxl-ssm-nq | bc967e5eec0c81987521fae49492a70accaeb3f6 | 2020-12-07T08:41:20.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-xxl-ssm-nq | 8 | null | transformers | 13,084 | ---
language: en
datasets:
- c4
- wikipedia
- natural_questions
pipeline_tag: text2text-generation
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-small|https://huggingface.co/google/t5-small-ssm-nq|25.5|
|T5-large|https://huggingface.co/google/t5-large-ssm-nq|30.4|
|T5-xl|https://huggingface.co/google/t5-xl-ssm-nq|35.6|
|**T5-xxl**|**https://huggingface.co/google/t5-xxl-ssm-nq**|**37.9**|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nq|33.2|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nq|36.6|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xxl-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xxl-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
guilhermedrud/bert-large-portuguese-socioambiental | 7a5d78d96f65c1b0b0444b0f6258371c5a982785 | 2021-09-17T20:17:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | guilhermedrud | null | guilhermedrud/bert-large-portuguese-socioambiental | 8 | null | transformers | 13,085 | Entry not found |
gyre/200wordrpgmodel | 4a90413248cb6acd8f7ada8eea497368e345afaf | 2021-05-23T17:53:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | gyre | null | gyre/200wordrpgmodel | 8 | null | transformers | 13,086 | |
harish/PT-UP-xlmR-FewShot-FalseTrue-0_0_BEST | 69882d5a01be12fe21579be70e4ff287beae2fee | 2021-06-28T15:48:17.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | harish | null | harish/PT-UP-xlmR-FewShot-FalseTrue-0_0_BEST | 8 | null | transformers | 13,087 | Entry not found |
healx/biomedical-dpr-qry-encoder | bb8eeb3de56597c4dc5d7050dbfe7381935c8525 | 2021-11-11T10:35:32.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2109.08564",
"transformers"
]
| feature-extraction | false | healx | null | healx/biomedical-dpr-qry-encoder | 8 | null | transformers | 13,088 | DPR query encoder for Biomedical slot filling see https://arxiv.org/abs/2109.08564 for details.
Load with:
```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizerFast
qry_encoder = DPRQuestionEncoder.from_pretrained('healx/biomedical-dpr-qry-encoder')
qry_tokenizer = DPRQuestionEncoderTokenizerFast.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
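# Follow-up sketch: encode a question into a dense query vector
# (the example question is illustrative only)
import torch

question = "Which gene is associated with cystic fibrosis?"
inputs = qry_tokenizer(question, return_tensors='pt')
with torch.no_grad():
    query_embedding = qry_encoder(**inputs).pooler_output  # shape: (1, 768)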
``` |
hf-internal-testing/tiny-random-m2m_100 | bd7eeddf7f4a792ec7eeca16de1b550959a3c8d9 | 2022-04-12T03:41:28.000Z | [
"pytorch",
"m2m_100",
"transformers"
]
| null | false | hf-internal-testing | null | hf-internal-testing/tiny-random-m2m_100 | 8 | null | transformers | 13,089 | Entry not found |
hfl/chinese-legal-electra-small-discriminator | de784afeb15b057b8b5319da520ac1150271894b | 2021-01-22T05:19:55.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0"
]
| null | false | hfl | null | hfl/chinese-legal-electra-small-discriminator | 8 | 1 | transformers | 13,090 | ---
language:
- zh
license: "apache-2.0"
---
# This model is specifically designed for the legal domain.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
For further accelerating the research of the Chinese pre-trained model, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters compared to BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
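## Usage
A minimal loading sketch for the discriminator (the example sentence is illustrative; `ElectraForPreTraining` returns one replaced-token score per token):
```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "hfl/chinese-legal-electra-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
discriminator = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("被告人对一审判决不服,提出上诉。", return_tensors="pt")
with torch.no_grad():
    scores = discriminator(**inputs).logits
print(scores.squeeze().tolist())
```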
## Citation
If you find our resource or paper is useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
huggingartists/coldplay | 54bdc430a110830b52fc9831ad72f2d9a52c904a | 2022-07-15T17:48:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/coldplay",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/coldplay | 8 | null | transformers | 13,091 | ---
language: en
datasets:
- huggingartists/coldplay
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/6cfcc2b1425286fe0d0b8c857c895b63.600x338x200.gif')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">π€ HuggingArtists Model π€</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Coldplay</div>
<a href="https://genius.com/artists/coldplay">
<div style="text-align: center; font-size: 14px;">@coldplay</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Coldplay.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/coldplay).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/coldplay")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/34tqcy7u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Coldplay's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/23h7o09h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/23h7o09h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/coldplay')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/coldplay")
model = AutoModelWithLMHead.from_pretrained("huggingartists/coldplay")
```
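With the tokenizer and model loaded this way, lyrics can be generated through the standard `generate` API. The sketch below is illustrative; the sampling settings are arbitrary choices, not values recommended in this card:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/coldplay")
model = AutoModelWithLMHead.from_pretrained("huggingartists/coldplay")

# Encode a prompt and sample a continuation.
inputs = tokenizer("I am", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=60,                        # illustrative length cap
        do_sample=True,                       # sample instead of greedy decoding
        top_p=0.95,                           # nucleus sampling threshold (illustrative)
        temperature=0.9,                      # illustrative temperature
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```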
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/elton-john | d75c9e287efb1809daab63254c7b869821fc2e3f | 2022-06-06T10:32:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/elton-john",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/elton-john | 8 | null | transformers | 13,092 | ---
language: en
datasets:
- huggingartists/elton-john
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/ec76d346c4c8b057169194c1781021fd.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elton John</div>
<a href="https://genius.com/artists/elton-john">
<div style="text-align: center; font-size: 14px;">@elton-john</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Elton John.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/elton-john).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/elton-john")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/188xpm2n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Elton John's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1rgstntu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1rgstntu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/elton-john')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/elton-john")
model = AutoModelWithLMHead.from_pretrained("huggingartists/elton-john")
```
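If a GPU is available, the loaded model can be moved onto it before generating. A minimal sketch, not part of the original card, with an arbitrary prompt and length cap:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/elton-john")
model = AutoModelWithLMHead.from_pretrained("huggingartists/elton-john")

# Move the model (and the encoded inputs) to a GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

inputs = tokenizer("I am", return_tensors="pt").to(device)
output_ids = model.generate(
    **inputs,
    max_length=40,                        # illustrative length cap
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```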
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/lil-nas-x | b334d7a39ccc8f3a0136d05aa7c6c5d44283bab7 | 2021-09-02T20:06:24.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/lil-nas-x",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/lil-nas-x | 8 | null | transformers | 13,093 | ---
language: en
datasets:
- huggingartists/lil-nas-x
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f50e1ac333da1f744f98eec38e44dd29.640x640x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lil Nas X</div>
<a href="https://genius.com/artists/lil-nas-x">
<div style="text-align: center; font-size: 14px;">@lil-nas-x</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Lil Nas X.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/lil-nas-x).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/lil-nas-x")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/n5s2tj7p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lil Nas X's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/334lnf7p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/334lnf7p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/lil-nas-x')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/lil-nas-x")
model = AutoModelWithLMHead.from_pretrained("huggingartists/lil-nas-x")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/loud-luxury | b1ce0599e6d14dfe18884d2777df08c54d0a9620 | 2021-09-12T03:29:59.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/loud-luxury",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/loud-luxury | 8 | null | transformers | 13,094 | ---
language: en
datasets:
- huggingartists/loud-luxury
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/6aa21ea8658908051e15b8d7808b5196.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Loud Luxury</div>
<a href="https://genius.com/artists/loud-luxury">
<div style="text-align: center; font-size: 14px;">@loud-luxury</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Loud Luxury.
The dataset is available [here](https://huggingface.co/datasets/huggingartists/loud-luxury).
It can be loaded with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/loud-luxury")
```
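For quick exploration, a split can also be converted to a pandas DataFrame. A minimal sketch that assumes a `train` split, as is typical for huggingartists datasets:
```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/loud-luxury")

# Convert to pandas for quick inspection; the "train" split is assumed.
df = dataset["train"].to_pandas()
print(df.shape)
print(df.head())
```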
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2a6kq74a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Loud Luxury's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2l3op3mf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2l3op3mf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/loud-luxury')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/loud-luxury")
model = AutoModelWithLMHead.from_pretrained("huggingartists/loud-luxury")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingface-course/mt5-finetuned-amazon-en-es-accelerate | df5b75dcf73a9fd8093f0e569d0f0210db03239a | 2021-10-06T10:19:02.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | huggingface-course | null | huggingface-course/mt5-finetuned-amazon-en-es-accelerate | 8 | null | transformers | 13,095 | Entry not found |
huggingtweets/_tinyflower | 6c5b0f09fdcfdac554f0b1ba41acad35f20c22f6 | 2021-05-21T17:17:43.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/_tinyflower | 8 | null | transformers | 13,096 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1322810236500025348/n4DEuDvs_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">flappy fawn 🌸🌼 🤖 AI Bot </div>
<div style="font-size: 15px">@_tinyflower bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@_tinyflower's tweets](https://twitter.com/_tinyflower).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3185 |
| Retweets | 2019 |
| Short tweets | 181 |
| Tweets kept | 985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4osh65pp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_tinyflower's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3237hlmg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3237hlmg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_tinyflower')
generator("My dream is", num_return_sequences=5)
```
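The pipeline forwards generation keyword arguments to `model.generate()`, so output length and randomness can be tuned in the same call. A minimal sketch, with illustrative parameter values that are not part of the original card:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/_tinyflower')

# The keyword arguments below are illustrative, not recommended settings.
outputs = generator(
    "My dream is",
    max_length=50,
    do_sample=True,
    top_k=50,
    num_return_sequences=3,
)

for out in outputs:
    print(out["generated_text"])
```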
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/actiongeologist | d74de07f11f4a8fb3825cad2681a10b026b47dd5 | 2021-05-21T17:32:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/actiongeologist | 8 | null | transformers | 13,097 | ---
language: en
thumbnail: https://www.huggingtweets.com/actiongeologist/1617468825652/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1322985945960902656/2dAh5NDP_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Lydia 🤖 AI Bot </div>
<div style="font-size: 15px">@actiongeologist bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@actiongeologist's tweets](https://twitter.com/actiongeologist).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1062 |
| Retweets | 31 |
| Short tweets | 81 |
| Tweets kept | 950 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/b7gw8mp3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @actiongeologist's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/327hbgyu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/327hbgyu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/actiongeologist')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ahmedallibhoy | 2a8820bc6c8feedb51f2ca7cafdb7cab4ad83cc8 | 2021-05-21T17:53:42.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/ahmedallibhoy | 8 | null | transformers | 13,098 | ---
language: en
thumbnail: https://www.huggingtweets.com/ahmedallibhoy/1616643813999/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1297351407809380352/gW1wWpRv_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ahmed 🤖 AI Bot </div>
<div style="font-size: 15px">@ahmedallibhoy bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ahmedallibhoy's tweets](https://twitter.com/ahmedallibhoy).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 226 |
| Retweets | 82 |
| Short tweets | 1 |
| Tweets kept | 143 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6cjgzd9a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ahmedallibhoy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g9v31lb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g9v31lb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ahmedallibhoy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/biocrimed-bladeecity-w3bcam | 3db19a755d67e3070200cf5182a9832b45a5bb50 | 2021-06-16T09:00:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/biocrimed-bladeecity-w3bcam | 8 | null | transformers | 13,099 | ---
language: en
thumbnail: https://www.huggingtweets.com/biocrimed-bladeecity-w3bcam/1623834051692/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1398220397049434117/3i7JMNiF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1399230370109825024/FypJacJv_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1404352885815664642/BEvtg0q4_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">bladee & Nothing person 2 & headaches</div>
<div style="text-align: center; font-size: 14px;">@biocrimed-bladeecity-w3bcam</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from bladee & Nothing person 2 & headaches.
| Data | bladee | Nothing person 2 | headaches |
| --- | --- | --- | --- |
| Tweets downloaded | 1599 | 1863 | 3231 |
| Retweets | 313 | 117 | 62 |
| Short tweets | 486 | 714 | 1451 |
| Tweets kept | 800 | 1032 | 1718 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37jgy6z4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @biocrimed-bladeecity-w3bcam's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1xg0n2ib) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1xg0n2ib/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/biocrimed-bladeecity-w3bcam')
generator("My dream is", num_return_sequences=5)
```
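For repeatable outputs during sampling, the random seed can be fixed before calling the pipeline. A minimal sketch; the seed value is arbitrary:
```python
from transformers import pipeline, set_seed

# Fixing the seed makes the sampled continuations reproducible across runs.
set_seed(42)

generator = pipeline(
    'text-generation',
    model='huggingtweets/biocrimed-bladeecity-w3bcam',
)
print(generator("My dream is", num_return_sequences=5))
```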
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|