modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hsaglamlar/stress_twitter | 6666c77ad4c4a707c9f1891ddd66b78dbd083464 | 2022-07-25T20:55:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:hsaglamlar/autotrain-data-stress_v2",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | hsaglamlar | null | hsaglamlar/stress_twitter | 29 | null | transformers | 7,300 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- hsaglamlar/autotrain-data-stress_v2
co2_eq_emissions: 2.7282806494855265
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1178743973
- CO2 Emissions (in grams): 2.7282806494855265
## Validation Metrics
- Loss: 0.431733638048172
- Accuracy: 0.7976190476190477
- Precision: 0.6918918918918919
- Recall: 0.8205128205128205
- AUC: 0.8952141608391608
- F1: 0.7507331378299119
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/hsaglamlar/autotrain-stress_v2-1178743973
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hsaglamlar/autotrain-stress_v2-1178743973", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hsaglamlar/autotrain-stress_v2-1178743973", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
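# --- illustrative extension, not part of the original card ---
# Convert the logits to a probability and a human-readable label; this assumes the
# model config carries the usual id2label mapping produced by AutoTrain.
import torch
probabilities = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probabilities.argmax(dim=-1))
print(model.config.id2label[predicted_id], probabilities[0, predicted_id].item())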
``` |
BossLee/t5-gec | ee42828b55034da69a844b558cf4586249e9c0e2 | 2021-11-11T11:42:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BossLee | null | BossLee/t5-gec | 28 | null | transformers | 7,301 | Entry not found |
BrianTin/MTBERT | cf1b9576c65e8625e10fe2901088f9ffc57645b8 | 2021-05-18T17:08:50.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BrianTin | null | BrianTin/MTBERT | 28 | null | transformers | 7,302 | Entry not found |
CoderEFE/DialoGPT-marxbot | 4e12cbe19700342269fa60b995b72c6b0f88a7ab | 2021-06-07T01:24:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CoderEFE | null | CoderEFE/DialoGPT-marxbot | 28 | null | transformers | 7,303 | ---
tags:
- conversational
---
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 200 tokens
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("MarxBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
DTAI-KULeuven/robbertje-39-gb-non-shuffled | 7111624edf031ee6da059ee1fbf9a7cb1ac04226 | 2021-08-13T10:51:47.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DTAI-KULeuven | null | DTAI-KULeuven/robbertje-39-gb-non-shuffled | 28 | null | transformers | 7,304 | Entry not found |
EasthShin/BTS_Lyrics_GPT-Neo-base | 592c6518ee954f13614c550629b4f41af38ca6dc | 2021-08-09T05:52:07.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | EasthShin | null | EasthShin/BTS_Lyrics_GPT-Neo-base | 28 | null | transformers | 7,305 | Entry not found |
Helsinki-NLP/opus-mt-bi-en | feb365f89ee1f47cad4f1581896b80ae88978983 | 2021-09-09T21:27:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bi",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bi-en | 28 | null | transformers | 7,306 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bi-en
* source languages: bi
* target languages: en
* OPUS readme: [bi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.en | 30.3 | 0.458 |
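The card gives benchmarks but no usage snippet; a minimal sketch with the standard MarianMT classes from 🤗 Transformers (the Bislama example sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Bislama to English
batch = tokenizer(["Mi wantem lanem Inglis."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```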
|
Helsinki-NLP/opus-mt-ca-pt | 272ac9087d98a17e9e9e0b6fe3628126e03e1099 | 2021-01-18T07:53:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ca",
"pt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ca-pt | 28 | null | transformers | 7,307 | ---
language:
- ca
- pt
tags:
- translation
license: apache-2.0
---
### cat-por
* source group: Catalan
* target group: Portuguese
* OPUS readme: [cat-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.por | 44.9 | 0.658 |
### System Info:
- hf_name: cat-por
- source_languages: cat
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'pt']
- src_constituents: {'cat'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt
- src_alpha3: cat
- tgt_alpha3: por
- short_pair: ca-pt
- chrF2_score: 0.6579999999999999
- bleu: 44.9
- brevity_penalty: 0.953
- ref_len: 5847.0
- src_name: Catalan
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: ca
- tgt_alpha2: pt
- prefer_old: False
- long_pair: cat-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-tn | 39eecd8754063c0a82d0525386c3d8c6ed6df0db | 2021-09-09T21:39:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tn | 28 | null | transformers | 7,308 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tn
* source languages: en
* target languages: tn
* OPUS readme: [en-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tn | 45.5 | 0.636 |
|
Helsinki-NLP/opus-mt-eo-fr | 3d9fb1c4184f318f966f65f1d6fbd9d3e7737d24 | 2021-09-09T21:41:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-fr | 28 | null | transformers | 7,309 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-eo-fr
* source languages: eo
* target languages: fr
* OPUS readme: [eo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.fr | 50.9 | 0.675 |
|
Helsinki-NLP/opus-mt-ny-en | 41bd8d14501ebfb32578c2daf169ed8f2f3ec5da | 2021-09-10T13:59:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ny",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ny-en | 28 | null | transformers | 7,310 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ny-en
* source languages: ny
* target languages: en
* OPUS readme: [ny-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.en | 39.7 | 0.547 |
| Tatoeba.ny.en | 44.2 | 0.562 |
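As with the other OPUS-MT cards in this dump, no usage example is included; a short sketch with the generic translation pipeline (the Chichewa input is only illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ny-en")
print(translator("Moni, muli bwanji?", max_length=64))
# expected: something along the lines of [{'translation_text': 'Hello, how are you?'}]
```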
|
Helsinki-NLP/opus-mt-sk-es | ce75e731abfa7c2d2991620c50c49cfbfb9c8ca2 | 2021-09-10T14:03:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sk",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sk-es | 28 | null | transformers | 7,311 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sk-es
* source languages: sk
* target languages: es
* OPUS readme: [sk-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.es | 29.6 | 0.505 |
|
Helsinki-NLP/opus-mt-ss-en | f98578726f54ad3d353a6b3cdbcbd71192567471 | 2021-09-10T14:04:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ss",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ss-en | 28 | null | transformers | 7,312 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ss-en
* source languages: ss
* target languages: en
* OPUS readme: [ss-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ss-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ss.en | 30.9 | 0.478 |
|
Ifromspace/GRIEFSOFT | ed74ab191da8b9bee8fc154393311e8463068c6a | 2022-01-15T13:06:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers",
"4ulan"
] | text-generation | false | Ifromspace | null | Ifromspace/GRIEFSOFT | 28 | 1 | transformers | 7,313 | ---
language:
- ru
tags:
- PyTorch
- Transformers
- 4ulan
---
**Fork of https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2**
A fun little model for the Discord server :)
ROADMAP:
- Collect a small dataset from "popadantsy" (portal-fantasy) novels. <------------------------- Currently here.
- Fine-tune on it.
- Release it to the Discord server.
https://discord.gg/HpeadKH |
KBLab/electra-base-swedish-cased-generator | 9c12ede983d54e382381a3f1a471eab7fd2d244f | 2021-01-20T13:17:06.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KBLab | null | KBLab/electra-base-swedish-cased-generator | 28 | null | transformers | 7,314 | Entry not found |
Lysa/subheading_generator_en | d54538df619bda7cba66deeaa18ae0ae812ee359 | 2021-06-13T17:24:28.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Lysa | null | Lysa/subheading_generator_en | 28 | null | transformers | 7,315 | Entry not found |
PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach | c095fe8a019926dcd1387962b0056ba31a0c4959 | 2021-06-23T03:49:54.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PaulAdversarial | null | PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach | 28 | null | transformers | 7,316 | ##A T5ForConditionalGeneration trained on 3 tasks from PAN Profiling Hate Speech Spreaders on Twitter dataset (EN):
* author attribution (train and test sets from the PAN task)
* topic attribution - topics were assigned with the BERTopic library using embeddings from the `cardiffnlp/bertweet-base-hate` RoBERTa model (train and test sets from the PAN task)
* hate speech identification (train set from the PAN task)
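The closing note below explains that generation is driven by a task prefix; a rough usage sketch (only the model id and the prefix come from this card, while the example input and generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prepend the task prefix described in the card, then generate the predicted tone
inputs = tokenizer("hater classification: I can't stand people like you", return_tensors="pt")
outputs = model.generate(**inputs, max_length=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```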
To generate the tone of a comment, prefix the input with **hater classification:**, as in the sketch above. |
SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune | 60d59123213538e47a6b07b2f4696a774d7c293e | 2021-06-23T07:46:32.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune | 28 | null | transformers | 7,317 | ---
tags:
- summarization
widget:
- text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
---
# CodeTrans model for code documentation generation python
Pretrained model on programming language python using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the python function/method.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/python/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing Python code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
T-Systems-onsite/cross-de-fr-roberta-sentence-transformer | 646c4dfed1135b713568267a0b9b3be14a3f8d1c | 2022-06-28T19:56:37.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"fr",
"de",
"dataset:stsb_multi_mt",
"transformers",
"sentence_embedding",
"search",
"roberta",
"xlm-r-distilroberta-base-paraphrase-v1",
"license:mit"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-de-fr-roberta-sentence-transformer | 28 | null | transformers | 7,318 | ---
language:
- fr
- de
license: mit
tags:
- sentence_embedding
- search
- pytorch
- xlm-roberta
- roberta
- xlm-r-distilroberta-base-paraphrase-v1
datasets:
- stsb_multi_mt
metrics:
- Spearman’s rank correlation
- cosine similarity
---
# Cross German & French RoBERTa for Sentence Embeddings
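The card ends after this title; a minimal sketch of how such a bi-encoder is typically used with the sentence-transformers package (the package choice and the example sentences are assumptions, only the model id comes from this row):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("T-Systems-onsite/cross-de-fr-roberta-sentence-transformer")
embeddings = model.encode([
    "Das ist ein schöner Tag.",   # German
    "C'est une belle journée.",   # French
])
# Cross-lingual similarity between the two sentences
print(util.cos_sim(embeddings[0], embeddings[1]))
```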
|
akhooli/gpt2-ar-poetry | 5da7539824e37c256bb30be15f8ec6eaf67c08d3 | 2021-05-21T12:34:58.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | akhooli | null | akhooli/gpt2-ar-poetry | 28 | null | transformers | 7,319 | Entry not found |
albertbn/gpt2-medium-finetuned-ads-fp16-blocksz512 | 818ae136d436ccb298c1a8d22cb31a9485ba5cea | 2021-05-21T12:44:25.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | albertbn | null | albertbn/gpt2-medium-finetuned-ads-fp16-blocksz512 | 28 | null | transformers | 7,320 | Entry not found |
anton-l/wav2vec2-large-xlsr-53-romanian | 2f1e970a98e68daeea93ffa17da566a05b05803c | 2021-07-05T20:20:21.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ro",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-romanian | 28 | null | transformers | 7,321 | ---
language: ro
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Romanian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ro
type: common_voice
args: ro
metrics:
- name: Test WER
type: wer
value: 24.84
---
# Wav2Vec2-Large-XLSR-53-Romanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ro", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romanian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ro.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ro/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ro/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 24.84 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
bertin-project/bertin-base-random | 754d33bc2bb3227fc4e61cf4151b4bfbbc30986f | 2021-09-23T13:42:00.000Z | [
"pytorch",
"jax",
"tensorboard",
"joblib",
"roberta",
"fill-mask",
"es",
"transformers",
"spanish",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | bertin-project | null | bertin-project/bertin-base-random | 28 | null | transformers | 7,322 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
pipeline_tag: fill-mask
widget:
- text: Fui a la librería a comprar un <mask>.
---
This is a **RoBERTa-base** model trained from scratch in Spanish.
The training dataset is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled) subsampling documents to a total of about 50 million examples. Sampling is random.
This model has been trained for 230,000 steps (early-stopped before the intended 250k steps).
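A short fill-mask sketch, reusing the widget example from the metadata above (the pipeline call is the standard 🤗 Transformers API rather than anything specific to this card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bertin-project/bertin-base-random")
for prediction in fill_mask("Fui a la librería a comprar un <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```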
Please see our main [card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for more information.
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo)) |
bertin-project/bertin-base-stepwise-exp-512seqlen | 62feafbd538396054ae69497ac49f6a07a3f2e20 | 2021-09-23T13:42:03.000Z | [
"pytorch",
"jax",
"tensorboard",
"joblib",
"roberta",
"fill-mask",
"es",
"transformers",
"spanish",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | bertin-project | null | bertin-project/bertin-base-stepwise-exp-512seqlen | 28 | null | transformers | 7,323 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
pipeline_tag: fill-mask
widget:
- text: Fui a la librería a comprar un <mask>.
---
This is a **RoBERTa-base** model trained from scratch in Spanish.
The training dataset is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled) subsampling documents to a total of about 50 million examples. Sampling is biased towards average perplexity values (using a Gaussian function), discarding more often documents with very large values (poor quality) or very small values (short, repetitive texts).
This model takes the one trained with [sequence length 128](https://huggingface.co/bertin-project/bertin-base-stepwise) and trains for a further 25,000 steps using sequence length 512.
Please see our main [card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for more information.
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo))
|
bertin-project/bertin-base-stepwise | 83a53bbd6807a25fc0dd8712e096b7501b2235bb | 2021-09-23T13:42:06.000Z | [
"pytorch",
"jax",
"tensorboard",
"joblib",
"roberta",
"fill-mask",
"es",
"transformers",
"spanish",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | bertin-project | null | bertin-project/bertin-base-stepwise | 28 | null | transformers | 7,324 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
pipeline_tag: fill-mask
widget:
- text: Fui a la librería a comprar un <mask>.
---
This is a **RoBERTa-base** model trained from scratch in Spanish.
The training dataset is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled) subsampling documents to a total of about 50 million examples. Sampling is biased towards average perplexity values (defining perplexity boundaries based on quartiles), discarding more often documents with very large values (Q4, poor quality) or very small values (Q1, short, repetitive texts).
This model has been trained for 180,000 steps (early-stopped before the intended 250k steps).
Please see our main [card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for more information.
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo)) |
ccdv/lsg-pegasus-large-4096 | 9c4cb9f8f9ba229d5122302c010601a834553e22 | 2022-07-25T18:11:34.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"arxiv:1912.08777",
"transformers",
"summarization",
"long context",
"fill-mask",
"autotrain_compatible"
] | fill-mask | false | ccdv | null | ccdv/lsg-pegasus-large-4096 | 28 | null | transformers | 7,325 | ---
tags:
- summarization
- pegasus
- long context
language:
- en
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
This model is adapted from [Pegasus-large](https://huggingface.co/google/pegasus-large) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences but faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub and relies on Local + Sparse + Global attention (LSG).
The model expects sequences whose length is a multiple of the block size; it is "adaptive" and automatically pads them if needed (adaptive=True in config). It is nevertheless recommended to truncate the inputs with the tokenizer (truncation=True) and optionally to pad to a multiple of the block size (pad_to_multiple_of=...). \
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* sparsity_type="block_stride", use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-pegasus-large-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-pegasus-large-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
padding="max_length", # Optional but recommended
truncation=True # Optional but recommended
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**Pegasus**
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
connorboyle/bert-ner-i2b2 | 9769589dd5855cf38172551d476dc4799de141f2 | 2021-12-01T00:13:39.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | connorboyle | null | connorboyle/bert-ner-i2b2 | 28 | 1 | transformers | 7,326 | Named-entity recognition model trained on the I2B2 training data set for PHI.
|
cross-encoder/msmarco-MiniLM-L12-en-de-v1 | 5d615f8f86798d9fb89dfd6d3fba53817acd45cf | 2021-08-05T08:40:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | cross-encoder | null | cross-encoder/msmarco-MiniLM-L12-en-de-v1 | 28 | null | transformers | 7,327 | ---
license: apache-2.0
---
# Cross-Encoder for MS MARCO - EN-DE
This is a cross-lingual Cross-Encoder model for EN-DE that can be used for passage re-ranking. It was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html).
The training code is available in this repository, see `train_script.py`.
## Usage with SentenceTransformers
When you have [SentenceTransformers](https://www.sbert.net/) installed, you can use the model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
query = 'How many people live in Berlin?'
docs = ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.']
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs)
```
## Usage with Transformers
With the transformers library, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
## Performance
The performance was evaluated on three datasets:
- **TREC-DL19 EN-EN**: The original [TREC 2019 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019.html): Given an English query and 1000 documents (retrieved by BM25 lexical search), rank the documents according to their relevance. We compute NDCG@10. BM25 achieves a score of 45.46, while a perfect re-ranker can achieve a score of 95.47.
- **TREC-DL19 DE-EN**: The English queries of TREC-DL19 have been translated by a German native speaker to German. We rank the German queries versus the English passages from the original TREC-DL19 setup. We compute NDCG@10.
- **GermanDPR DE-DE**: The [GermanDPR](https://www.deepset.ai/germanquad) dataset provides German queries and German passages from Wikipedia. We indexed the 2.8 Million paragraphs from German Wikipedia and retrieved for each query the top 100 most relevant passages using BM25 lexical search with Elasticsearch. We compute MRR@10. BM25 achieves a score of 35.85, a perfect re-ranker can achieve a score of 76.27.
We also check the performance of bi-encoders using the same evaluation: The retrieved documents from BM25 lexical search are re-ranked using query & passage embeddings with cosine-similarity. Bi-Encoders can also be used for end-to-end semantic search.
| Model-Name | TREC-DL19 EN-EN | TREC-DL19 DE-EN | GermanDPR DE-DE | Docs / Sec |
| ------------- |:-------------:| :-----: | :---: | :----: |
| BM25 | 45.46 | - | 35.85 | -|
| **Cross-Encoder Re-Rankers** | | | |
| [cross-encoder/msmarco-MiniLM-L6-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L6-en-de-v1) | 72.43 | 65.53 | 46.77 | 1600 |
| [cross-encoder/msmarco-MiniLM-L12-en-de-v1](https://huggingface.co/cross-encoder/msmarco-MiniLM-L12-en-de-v1) | 72.94 | 66.07 | 49.91 | 900 |
| [svalabs/cross-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/cross-electra-ms-marco-german-uncased) (DE only) | - | - | 53.67 | 260 |
| [deepset/gbert-base-germandpr-reranking](https://huggingface.co/deepset/gbert-base-germandpr-reranking) (DE only) | - | - | 53.59 | 260 |
| **Bi-Encoders (re-ranking)** | | | |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) | 63.38 | 58.28 | 37.88 | 940 |
| [sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch](https://huggingface.co/sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-trained-scratch) | 65.51 | 58.69 | 38.32 | 940 |
| [svalabs/bi-electra-ms-marco-german-uncased](https://huggingface.co/svalabs/bi-electra-ms-marco-german-uncased) (DE only) | - | - | 34.31 | 450 |
| [deepset/gbert-base-germandpr-question_encoder](https://huggingface.co/deepset/gbert-base-germandpr-question_encoder) (DE only) | - | - | 42.55 | 450 |
Note: Docs / Sec gives the number of (query, document) pairs we can re-rank within a second on a V100 GPU.
|
cvcio/mediawatch-el-topics | 2ca63fb4f74ab3c8f5c2058cabe20f1c39d6197e | 2022-02-20T12:26:45.000Z | [
"pytorch",
"roberta",
"text-classification",
"el",
"transformers",
"Greek",
"news",
"license:gpl-3.0",
"model-index"
] | text-classification | false | cvcio | null | cvcio/mediawatch-el-topics | 28 | null | transformers | 7,328 | ---
language: el
license: gpl-3.0
tags:
- roberta
- Greek
- news
- transformers
- text-classification
pipeline_tag: text-classification
model-index:
- name: mediawatch-el-topics
results:
- task:
type: text-classification
name: Multi Label Text Classification
metrics:
- type: roc_auc
value: 98.55
name: ROCAUC
- type: eval_AFFAIRS
value: 98.72
name: AFFAIRS
- type: eval_AGRICULTURE
value: 97.99
name: AGRICULTURE
- type: eval_ARTS_AND_CULTURE
value: 98.38
name: ARTS_AND_CULTURE
- type: eval_BREAKING_NEWS
value: 96.75
name: BREAKING_NEWS
- type: eval_BUSINESS
value: 98.11
name: BUSINESS
- type: eval_COVID
value: 96.2
name: COVID
- type: eval_CRIME
value: 98.85
name: CRIME
- type: eval_ECONOMY
value: 97.65
name: ECONOMY
- type: eval_EDUCATION
value: 98.65
name: EDUCATION
- type: eval_ELECTIONS
value: 99.4
name: ELECTIONS
- type: eval_ENTERTAINMENT
value: 99.25
name: ENTERTAINMENT
- type: eval_ENVIRONMENT
value: 98.47
name: ENVIRONMENT
- type: eval_FOOD
value: 99.34
name: FOOD
- type: eval_HEALTH
value: 97.23
name: HEALTH
- type: eval_INTERNATIONAL
value: 96.24
name: INTERNATIONAL
- type: eval_JUSTICE
value: 98.62
name: JUSTICE
- type: eval_LAW_AND_ORDER
value: 91.77
name: LAW_AND_ORDER
- type: eval_MILITARY
value: 98.38
name: MILITARY
- type: eval_NON_PAPER
value: 95.95
name: NON_PAPER
- type: eval_OPINION
value: 96.24
name: OPINION
- type: eval_POLITICS
value: 97.73
name: POLITICS
- type: eval_REFUGEE
value: 99.49
name: REFUGEE
- type: eval_REGIONAL
value: 95.2
name: REGIONAL
- type: eval_RELIGION
value: 99.22
name: RELIGION
- type: eval_SCIENCE
value: 98.37
name: SCIENCE
- type: eval_SOCIAL_MEDIA
value: 99.1
name: SOCIAL_MEDIA
- type: eval_SOCIETY
value: 94.39
name: SOCIETY
- type: eval_SPORTS
value: 99.39
name: SPORTS
- type: eval_TECH
value: 99.23
name: TECH
- type: eval_TOURISM
value: 99.0
name: TOURISM
- type: eval_TRANSPORT
value: 98.79
name: TRANSPORT
- type: eval_TRAVEL
value: 98.32
name: TRAVEL
- type: eval_WEATHER
value: 99.5
name: WEATHER
widget:
- text: "Παρ’ ολίγον «θερμό» επεισόδιο τουρκικού πολεμικού πλοίου με ελληνικό ωκεανογραφικό στην περιοχή μεταξύ Ρόδου και Καστελόριζου, στο διάστημα 20-23 Σεπτεμβρίου, αποκάλυψε το ΟΡΕΝ. Σύμφωνα με πληροφορίες που μετέδωσε το κεντρικό δελτίο ειδήσεων, όταν το ελληνικό ερευνητικό « ΑΙΓΑΙΟ » που ανήκει στο Ελληνικό Κέντρο Θαλασσίων Ερευνών βγήκε έξω από τα 6 ν.μ, σε διεθνή ύδατα, το προσέγγισε τουρκικό πολεμικό πλοίο, ο κυβερνήτης του οποίου ζήτησε δύο φορές μέσω ασυρμάτου να ενημερωθεί για τα στοιχεία του πλοίου, αλλά και για την αποστολή του. Ο πλοίαρχος του ελληνικού ερευνητικού δεν απάντησε και τελικά το τουρκικό πολεμικό απομακρύνθηκε."
example_title: Topic AFFAIRS
- text: "Η κυβερνητική ανικανότητα οδηγεί την χώρα στο χάος. Η κυβερνηση Μητσοτακη αδυνατεί να διαχειριστεί την πανδημία. Δεν μπορει ούτε να πείσει τον κόσμο να εμβολιαστεί, που ήταν το πιο απλο πράγμα. Σημερα λοιπόν φτάσαμε στο σημείο να μιλάμε για επαναφορά της χρήσης μάσκας σε εξωτερικούς χώρους ακόμη και όπου δεν υπάρχει συγχρωτισμός. Στις συζητήσεις των ειδικών θα βρεθεί επίσης το ενδεχόμενο για τοπικά lockdown σε περιοχές με βαρύ ιικό φορτίο για να μην ξεφύγει η κατάσταση, ενώ θα χρειάζεται κάποιος για τις μετακινήσεις του είτε πιστοποιητικό εμβολιασμού ή νόσησης και οι ανεμβολίαστοι rapid ή μοριακό τεστ."
example_title: Topic COVID
- text: "Η «ωραία Ελένη» επέστρεψε στην τηλεόραση, μέσα από τη συχνότητα του MEGA και άφησε τις καλύτερες εντυπώσεις. Το πλατό από το οποίο εμφανίζεται η Ελένη Μενεγάκη έχει φτιαχτεί από την αρχή για την εκπομπή της. Σήμερα, στο κλείσιμο της εκπομπής η Ελένη πέρασε ανάμεσα από τις κάμερες για να μπει στο καμαρίνι της «Μην τρομοκρατείστε, είμαι η Ελένη Μενεγάκη, τα κάνω αυτά. Με συγχωρείται, έχω ψυχολογικά αν δεν είμαι ελεύθερη» είπε αρχικά η παρουσιάστρια στους συνεργάτες της και πρόσθεσε στη συνέχεια: «Η Ελένη ολοκλήρωσε. Μπορείτε να συνεχίσετε με το υπόλοιπο πρόγραμμα του Mega. Εγώ ανοίγω το καμαρίνι, αν με αφήσουν. Μπαίνω καμαρίνι». Δείτε το απόσπασμα!"
example_title: Topic ENTERTAINMENT
- text: "Ένα εξαιρετικά ενδιαφέρον «κουτσομπολιό» εντόπισαν οι κεραίες της στήλης πέριξ του Μεγάρου Μαξίμου : το κατά πόσον, δηλαδή, ο «εξ απορρήτων» του Κυριάκου Μητσοτάκη , Γιώργος Γεραπετρίτης μετέχει στη διαχείριση της πανδημίας και στην διαδικασία λήψης αποφάσεων. Το εν λόγω «κουτσομπολιό» πυροδότησε το γεγονός ότι σε σαββατιάτικη εφημερίδα δημοσιεύθηκαν προχθές δηλώσεις του υπουργού Επικρατείας με τις οποίες απέκλειε κάθε σενάριο νέων οριζόντιων μέτρων και την ίδια ώρα, το Μαξίμου ανήγγελλε… καραντίνα στη Μύκονο. «Είναι αυτονόητο ότι η κοινωνία και η οικονομία δεν αντέχουν οριζόντιους περιορισμούς», έλεγε χαρακτηριστικά ο Γεραπετρίτης, την ώρα που η κυβέρνηση ανακοίνωνε… αυτούς τους οριζόντιους περιορισμούς. Ως εκ τούτων, δύο τινά μπορεί να συμβαίνουν: είτε ο υπουργός Επικρατείας δεν μετέχει πλέον στη λήψη των αποφάσεων, είτε η απόφαση για οριζόντια μέτρα ελήφθη υπό το κράτος πανικού το πρωί του Σαββάτου, όταν έφτασε στο Μαξίμου η τελευταία «φουρνιά» των επιδημιολογικών δεδομένων για το νησί των ανέμων…"
example_title: Topic NON_PAPER
- text: "Είναι ξεκάθαρο ότι μετά το πλήγμα που δέχθηκε η κυβέρνησή του από τις αδυναμίες στην αντιμετώπιση των καταστροφικών πυρκαγιών το μεγάλο στοίχημα για τον Κυριάκο Μητσοτάκη είναι να προχωρήσει συντεταγμένα και χωρίς παρατράγουδα ο σχεδιασμός για την αποκατάσταση των ζημιών. Ο Πρωθυπουργός έχει ήδη φτιάξει μια ομάδα κρούσης την οποία αποτελούν 9 υπουργοί. Τα μέλη που απαρτίζουν την ομάδα κρούσης και τα οποία βρίσκονται σε συνεχή, καθημερινή επαφή με τον Κυριάκο Μητσοτάκη είναι, όπως μας πληροφορεί η στήλη «Θεωρείο» της «Καθημερινής» είναι οι: Γ. Γεραπετρίτης, Α. Σκέρτσος, Χρ. Τριαντόπουλος, Κ. Καραμανλής, Κ. Σκρέκας, Στ. Πέτσας, Σπ. Λιβανός και φυσικά οι Χρ. Σταικούρας και Θ. Σκυλακάκης."
example_title: Topic OPINION
---
**Disclaimer**: *This model is still under testing and may change in the future, we will try to keep backwards compatibility. For any questions reach us at [email protected]*
# MediaWatch News Topics (Greek)
Fine-tuned model for multi-label text-classification (SequenceClassification), based on [roberta-el-news](https://huggingface.co/cvcio/roberta-el-news), using [Hugging Face's](https://huggingface.co/) [Transformers](https://github.com/huggingface/transformers) library. This model classifies news in real time into up to 33 topics, including: *AFFAIRS*, *AGRICULTURE*, *ARTS_AND_CULTURE*, *BREAKING_NEWS*, *BUSINESS*, *COVID*, *ECONOMY*, *EDUCATION*, *ELECTIONS*, *ENTERTAINMENT*, *ENVIRONMENT*, *FOOD*, *HEALTH*, *INTERNATIONAL*, *LAW_AND_ORDER*, *MILITARY*, *NON_PAPER*, *OPINION*, *POLITICS*, *REFUGEE*, *REGIONAL*, *RELIGION*, *SCIENCE*, *SOCIAL_MEDIA*, *SOCIETY*, *SPORTS*, *TECH*, *TOURISM*, *TRANSPORT*, *TRAVEL*, *WEATHER*, *CRIME*, *JUSTICE*.
## How to use
You can use this model directly with a pipeline for text-classification:
```python
from transformers import pipeline
pipe = pipeline(
task="text-classification",
model="cvcio/mediawatch-el-topics",
tokenizer="cvcio/roberta-el-news" # or cvcio/mediawatch-el-topics
)
topics = pipe(
"Η βιασύνη αρκετών χωρών να άρουν τους περιορισμούς κατά του κορονοϊού, "+
"αν όχι να κηρύξουν το τέλος της πανδημίας, με το σκεπτικό ότι έφτασε "+
"πλέον η ώρα να συμβιώσουμε με την Covid-19, έχει κάνει μερικούς πιο "+
"επιφυλακτικούς επιστήμονες να προειδοποιούν ότι πρόκειται μάλλον "+
"για «ενδημική αυταπάτη» και ότι είναι πρόωρη τέτοια υπερβολική "+
"χαλάρωση. Καθώς τα κρούσματα της Covid-19, μετά το αιφνιδιαστικό "+
"μαζικό κύμα της παραλλαγής Όμικρον, εμφανίζουν τάση υποχώρησης σε "+
"Ευρώπη και Βόρεια Αμερική, όπου περισσεύει η κόπωση μεταξύ των "+
"πολιτών μετά από δύο χρόνια πανδημίας, ειδικοί και μη αδημονούν να "+
"«ξεμπερδέψουν» με τον κορονοϊό.",
padding=True,
truncation=True,
max_length=512,
return_all_scores=True
)
print(topics)
# outputs
[
[
{'label': 'AFFAIRS', 'score': 0.0018806682201102376},
{'label': 'AGRICULTURE', 'score': 0.00014653144171461463},
{'label': 'ARTS_AND_CULTURE', 'score': 0.0012948638759553432},
{'label': 'BREAKING_NEWS', 'score': 0.0001729220530251041},
{'label': 'BUSINESS', 'score': 0.0028276608791202307},
{'label': 'COVID', 'score': 0.4407998025417328},
{'label': 'ECONOMY', 'score': 0.039826102554798126},
{'label': 'EDUCATION', 'score': 0.0019098613411188126},
{'label': 'ELECTIONS', 'score': 0.0003333651984576136},
{'label': 'ENTERTAINMENT', 'score': 0.004249618388712406},
{'label': 'ENVIRONMENT', 'score': 0.0015828514005988836},
{'label': 'FOOD', 'score': 0.0018390495097264647},
{'label': 'HEALTH', 'score': 0.1204477995634079},
{'label': 'INTERNATIONAL', 'score': 0.25892165303230286},
{'label': 'LAW_AND_ORDER', 'score': 0.07646272331476212},
{'label': 'MILITARY', 'score': 0.00033025629818439484},
{'label': 'NON_PAPER', 'score': 0.011991199105978012},
{'label': 'OPINION', 'score': 0.16166265308856964},
{'label': 'POLITICS', 'score': 0.0008890336030162871},
{'label': 'REFUGEE', 'score': 0.0011504743015393615},
{'label': 'REGIONAL', 'score': 0.0008734092116355896},
{'label': 'RELIGION', 'score': 0.0009001944563351572},
{'label': 'SCIENCE', 'score': 0.05075162276625633},
{'label': 'SOCIAL_MEDIA', 'score': 0.00039615994319319725},
{'label': 'SOCIETY', 'score': 0.0043518817983567715},
{'label': 'SPORTS', 'score': 0.002416545059531927},
{'label': 'TECH', 'score': 0.0007818648009561002},
{'label': 'TOURISM', 'score': 0.011870541609823704},
{'label': 'TRANSPORT', 'score': 0.0009422845905646682},
{'label': 'TRAVEL', 'score': 0.03004464879631996},
{'label': 'WEATHER', 'score': 0.00040286066359840333},
{'label': 'CRIME', 'score': 0.0005416403291746974},
{'label': 'JUSTICE', 'score': 0.000990519649349153}
]
]
```
## Labels
All labels, except *NON_PAPER*, were retrieved from the source articles during the data collection step, without any preprocessing, assuming that journalists and newsrooms assign correct tags to the articles. We disregarded all articles with more than 6 tags to reduce bias and tag manipulation.
| label | roc_auc | samples |
|-------:|--------:|--------:|
| AFFAIRS | 0.9872 | 6,314 |
| AGRICULTURE | 0.9799 | 1,254 |
| ARTS_AND_CULTURE | 0.9838 | 15,968 |
| BREAKING_NEWS | 0.9675 | 827 |
| BUSINESS | 0.9811 | 6,507 |
| COVID | 0.9620 | 50,000 |
| CRIME | 0.9885 | 34,421 |
| ECONOMY | 0.9765 | 45,474 |
| EDUCATION | 0.9865 | 10,111 |
| ELECTIONS | 0.9940 | 7,571 |
| ENTERTAINMENT | 0.9925 | 23,323 |
| ENVIRONMENT | 0.9847 | 23,060 |
| FOOD | 0.9934 | 3,712 |
| HEALTH | 0.9723 | 16,852 |
| INTERNATIONAL | 0.9624 | 50,000 |
| JUSTICE | 0.9862 | 4,860 |
| LAW_AND_ORDER | 0.9177 | 50,000 |
| MILITARY | 0.9838 | 6,536 |
| NON_PAPER | 0.9595 | 4,589 |
| OPINION | 0.9624 | 6,296 |
| POLITICS | 0.9773 | 50,000 |
| REFUGEE | 0.9949 | 4,536 |
| REGIONAL | 0.9520 | 50,000 |
| RELIGION | 0.9922 | 11,533 |
| SCIENCE | 0.9837 | 1,998 |
| SOCIAL_MEDIA | 0.991 | 6,212 |
| SOCIETY | 0.9439 | 50,000 |
| SPORTS | 0.9939 | 31,396 |
| TECH | 0.9923 | 8,225 |
| TOURISM | 0.9900 | 8,081 |
| TRANSPORT | 0.9879 | 3,211 |
| TRAVEL | 0.9832 | 4,638 |
| WEATHER | 0.9950 | 19,931 |
| loss | 0.0533 | - |
| roc_auc | 0.9855 | - |
## Pretraining
The model was pretrained using an NVIDIA A10 GPU for 15 epochs (approximately 59K steps, about 8 hours of training) with a batch size of 128. The optimizer used is Adam with a learning rate of 1e-5 and weight decay 0.01. We used roc_auc_micro to evaluate the results.
### Framework versions
- Transformers 4.13.0
- Pytorch 1.9.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
## Authors
Dimitris Papaevagelou - [@andefined](https://github.com/andefined)
## About Us
[Civic Information Office](https://cvcio.org/) is a Non Profit Organization based in Athens, Greece focusing on creating technology and research products for the public interest. |
dbernsohn/roberta-javascript | bea9a1d2c8158136f315fb634c47bb34197db42e | 2021-05-20T15:55:17.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"javascript",
"dataset:code_search_net",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbernsohn | null | dbernsohn/roberta-javascript | 28 | null | transformers | 7,329 | # roberta-javascript
---
language: javascript
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **javascript** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-javascript")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-javascript")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in JavaScript code.
```python
code = """
var i;
for (i = 0; i < cars.<mask>; i++) {
text += cars[i] + "<br>";
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('length', 0.9959614872932434),
# ('i', 0.00027875584783032537),
# ('len', 0.0002283261710545048),
# ('nodeType', 0.00013731322542298585),
# ('index', 7.5289819505997e-05)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/) |
dingkun/retrievalv1 | f7b16d1a858f41554769fbebfd7b2ce8d598f420 | 2022-06-20T03:00:26.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dingkun | null | dingkun/retrievalv1 | 28 | null | transformers | 7,330 | Entry not found |
emre/wav2vec-tr-lite-AG | 7e8302e4b06020fabfaf5d6dbcb86e7cb108a757 | 2021-12-10T22:46:25.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | emre | null | emre/wav2vec-tr-lite-AG | 28 | null | transformers | 7,331 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Davut Emre TASAR
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
---
# wav2vec-tr-lite-AG
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# read one test clip, resample it to 16 kHz and transcribe it
speech_array, _ = torchaudio.load(test_dataset[0]["path"])
input_values = processor(resampler(speech_array).squeeze().numpy(), sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
print("Prediction:", processor.batch_decode(torch.argmax(logits, dim=-1)))
```
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.00005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
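For illustration, here is a minimal sketch of how the values above map onto 🤗 `TrainingArguments`; this is an assumption about the setup rather than the original training script, and `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

# assumed mapping of the hyperparameters listed above (2 GPUs x batch 2 x accumulation 8 = 32)
training_args = TrainingArguments(
    output_dir="wav2vec-tr-lite-AG",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    fp16=True,                         # native AMP
)
```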
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4388 | 3.7 | 400 | 1.366 | 0.9701 |
| 0.3766 | 7.4 | 800 | 0.4914 | 0.5374 |
| 0.2295 | 11.11 | 1200 | 0.3934 | 0.4125 |
| 0.1121 | 14.81 | 1600 | 0.3264 | 0.2904 |
| 0.1473 | 18.51 | 2000 | 0.3103 | 0.2671 |
| 0.1013 | 22.22 | 2400 | 0.2589 | 0.2324 |
| 0.0704 | 25.92 | 2800 | 0.2826 | 0.2339 |
| 0.0537 | 29.63 | 3200 | 0.2704 | 0.2309 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
hgw3lss/gpt-j-6B-Buckland | b460b2de16f21348418ca454c48893b6ff93a73d | 2022-02-12T15:31:01.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | hgw3lss | null | hgw3lss/gpt-j-6B-Buckland | 28 | null | transformers | 7,332 | Entry not found |
huggingtweets/dril-fart-horse_ebooks | fb50047f0c95140282dd089e2ebe097ef4d83f8d | 2021-12-23T01:55:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dril-fart-horse_ebooks | 28 | 1 | transformers | 7,333 | ---
language: en
thumbnail: http://www.huggingtweets.com/dril-fart-horse_ebooks/1640224513212/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1422460373152583683/d1k9xcgN_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1096005346/1_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & jon (PERSECUTED) & Horse ebooks</div>
<div style="text-align: center; font-size: 14px;">@dril-fart-horse_ebooks</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & jon (PERSECUTED) & Horse ebooks.
| Data | wint | jon (PERSECUTED) | Horse ebooks |
| --- | --- | --- | --- |
| Tweets downloaded | 3226 | 3217 | 3200 |
| Retweets | 477 | 571 | 0 |
| Short tweets | 306 | 558 | 421 |
| Tweets kept | 2443 | 2088 | 2779 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wf0ppmhi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-fart-horse_ebooks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/unmddioo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/unmddioo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-fart-horse_ebooks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/femoidfurry | b867b94353416af9144f5b7ff5d8a384e04dd769 | 2021-09-17T01:24:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/femoidfurry | 28 | null | transformers | 7,334 | ---
language: en
thumbnail: https://www.huggingtweets.com/femoidfurry/1631841845149/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1431013593366122500/8WydbuRe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">reinforced stan acc (gfm pinned)</div>
<div style="text-align: center; font-size: 14px;">@femoidfurry</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from reinforced stan acc (gfm pinned).
| Data | reinforced stan acc (gfm pinned) |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 1497 |
| Short tweets | 193 |
| Tweets kept | 1526 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vm7b6sqf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @femoidfurry's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/14h9aw0o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/14h9aw0o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/femoidfurry')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/heatherchungus | 3c868dc96e6177dd8dc597b57155aadd7a1f17c8 | 2021-05-22T06:44:28.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/heatherchungus | 28 | null | transformers | 7,335 | ---
language: en
thumbnail: https://www.huggingtweets.com/heatherchungus/1617912956937/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1376723854031212546/NlgDr-ha_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">heather (TRUE)² 🍁 ✝ ⚜ 🤖 AI Bot </div>
<div style="font-size: 15px">@heatherchungus bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@heatherchungus's tweets](https://twitter.com/heatherchungus).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 84 |
| Short tweets | 1058 |
| Tweets kept | 2090 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kha682j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @heatherchungus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/duib9vv9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/duib9vv9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/heatherchungus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/stockstotrade | e8bb1fe59d3d111de64994ff190b786983a7a740 | 2021-11-19T03:41:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/stockstotrade | 28 | null | transformers | 7,336 | ---
language: en
thumbnail: https://www.huggingtweets.com/stockstotrade/1637293295111/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/469936583416610816/EZt8Vl04_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">StocksToTrade</div>
<div style="text-align: center; font-size: 14px;">@stockstotrade</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from StocksToTrade.
| Data | StocksToTrade |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 663 |
| Short tweets | 360 |
| Tweets kept | 2215 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c33zwruj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stockstotrade's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1upgfq9z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1upgfq9z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/stockstotrade')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
icelab/spaceroberta_CR | 4b002f0f633b57e112e14633c169ab7441d428a5 | 2022-02-16T09:30:10.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | icelab | null | icelab/spaceroberta_CR | 28 | null | transformers | 7,337 | ---
widget:
- text: "The CubeSat RF design shall either have one RF inhibit and a RF power output no greater than 1.5W at the transmitter antenna's RF input OR the CubeSat shall have a minimum of two independent RF inhibits (CDS 3.3.9) (ISO 5.5.6)."
---
---
# spaceroberta_CR
## Model description
This is a fine-tuned SpaceSciBERT model for a Concept Recognition task, from the SpaceTransformers model family presented in *SpaceTransformers: Language Modeling for Space Systems*. The original Git repo is [strath-ace/smart-nlp](https://github.com/strath-ace/smart-nlp). The [fine-tuning](https://github.com/strath-ace/smart-nlp/blob/master/SpaceTransformers/CR/CR_ECSS_dataset.json) dataset is available for download and consists of 874 unique manually annotated ECSS requirements.
The notebook for fine-tuning can be accessed in Google Colab:
[](https://colab.research.google.com/drive/1EGh9bdxq6RqIzbvKuptAWvmIBG2EQJzJ?usp=sharing)
### BibTeX entry and citation info
```
@ARTICLE{ 9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659} }
``` |
kaesve/BERT_patent_reference_extraction | c9dcc71e951855ed35b5cd0f3def453794f540a4 | 2021-05-19T20:57:51.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:2101.01039",
"transformers",
"autotrain_compatible"
] | fill-mask | false | kaesve | null | kaesve/BERT_patent_reference_extraction | 28 | null | transformers | 7,338 | # Reference extraction in patents
This repository contains a finetuned BERT model that can extract references to scientific literature from patents.
See https://github.com/kaesve/patent-citation-extraction and https://arxiv.org/abs/2101.01039 for more information. |
malay-huggingface/albert-tiny-bahasa-cased | 6d6150b8fabae7e55cffc7ace626111bb8a67f2e | 2021-09-26T12:37:13.000Z | [
"pytorch",
"albert",
"fill-mask",
"ms",
"transformers",
"autotrain_compatible"
] | fill-mask | false | malay-huggingface | null | malay-huggingface/albert-tiny-bahasa-cased | 28 | null | transformers | 7,339 | ---
language: ms
---
# albert-tiny-bahasa-cased
Pretrained ALBERT tiny language model for Malay.
## Pretraining Corpus
`albert-tiny-bahasa-cased` model was pretrained on ~1.4 billion words. Below is the list of data we trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can reproduce from here, [Malaya/pretrained-model/albert](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/albert).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it directly like this:
```python
from transformers import AlbertTokenizer, AlbertModel
model = AlbertModel.from_pretrained('malay-huggingface/albert-tiny-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'malay-huggingface/albert-tiny-bahasa-cased',
do_lower_case = False,
)
```
## Example using AutoModelWithLMHead
```python
from transformers import AlbertTokenizer, AlbertForMaskedLM, pipeline
model = AlbertForMaskedLM.from_pretrained('malay-huggingface/albert-tiny-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
'malay-huggingface/albert-tiny-bahasa-cased',
do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('Permohonan Najib, anak untuk dengar isu perlembagaan [MASK] .')
```
Output is,
```text
[{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan Malaysia.',
'score': 0.09178723394870758,
'token': 1957,
'token_str': 'M a l a y s i a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan negara.',
'score': 0.053524162620306015,
'token': 2134,
'token_str': 'n e g a r a'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan dikemukakan.',
'score': 0.031137527897953987,
'token': 9383,
'token_str': 'd i k e m u k a k a n'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan 1MDB.',
'score': 0.02826082520186901,
'token': 13838,
'token_str': '1 M D B'},
{'sequence': 'Permohonan Najib, anak untuk dengar isu perlembagaan ditolak.',
'score': 0.026568090543150902,
'token': 11465,
'token_str': 'd i t o l a k'}]
```
|
microsoft/unispeech-sat-base-100h-libri-ft | b294556d8d77cc539f9d08bef0ddbb0f60985328 | 2021-11-04T15:26:40.000Z | [
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2110.05752",
"transformers",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | microsoft | null | microsoft/unispeech-sat-base-100h-libri-ft | 28 | 3 | transformers | 7,340 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# UniSpeech-SAT-Base-Finetuned-100h-Libri
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
A [unispeech-sat-base model](https://huggingface.co/microsoft/unispeech-sat-base) that was fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model
make sure that your speech input is also sampled at 16kHz.
The model was fine-tuned on:
- 100 hours of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, UniSpeechSatForCTC
from datasets import load_dataset
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
# load dummy dataset
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
mlcorelib/deberta-base-uncased | 190353c0376b3a54214228847caa4979f99ec99a | 2021-05-01T12:33:45.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | mlcorelib | null | mlcorelib/deberta-base-uncased | 28 | null | transformers | 7,341 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch of this scheme follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
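For illustration, here is a minimal sketch of this 80%/10%/10% corruption scheme; it is a simplification of the original implementation (special-token handling and whole-word masking are omitted), and the function name is an arbitrary choice.
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mask_prob=0.15):
    """Return (corrupted inputs, labels) following the BERT masking scheme."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored by the loss
    for i, token_id in enumerate(token_ids):
        if random.random() < mask_prob:           # 15% of tokens are selected
            labels[i] = token_id                  # the model must recover the original token
            draw = random.random()
            if draw < 0.8:                        # 80%: replace with [MASK]
                inputs[i] = mask_token_id
            elif draw < 0.9:                      # 10%: replace with a random token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return inputs, labels
```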
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
mudes/en-base | 1600298e0458d961c2c749da7a098f69b4be478a | 2021-05-20T01:03:44.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"en",
"arxiv:2102.09665",
"arxiv:2104.04630",
"transformers",
"mudes",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | mudes | null | mudes/en-base | 28 | 1 | transformers | 7,342 | ---
language: en
tags:
- mudes
license: apache-2.0
---
# MUDES - {Mu}ltilingual {De}tection of Offensive {S}pans
We provide state-of-the-art models to detect toxic spans in social media texts. We introduce our framework in [this paper](https://arxiv.org/abs/2102.09665). We have evaluated our models on Toxic Spans task at SemEval 2021 (Task 5). Our participation in the task is detailed in [this paper](https://arxiv.org/abs/2104.04630).
## Usage
You can use this model when you have [MUDES](https://github.com/TharinduDR/MUDES) installed:
```bash
pip install mudes
```
Then you can use the model like this:
```python
from mudes.app.mudes_app import MUDESApp
app = MUDESApp("en-base", use_cuda=False)
print(app.predict_toxic_spans("You motherfucking cunt", spans=True))
```
## System Demonstration
An experimental demonstration interface called MUDES-UI has been released on [GitHub](https://github.com/TharinduDR/MUDES-UI) and can be checked out [here](http://rgcl.wlv.ac.uk/mudes/).
## Citing & Authors
If you find this model helpful, feel free to cite our publications
```bibtex
@inproceedings{ranasinghemudes,
title={{MUDES: Multilingual Detection of Offensive Spans}},
author={Tharindu Ranasinghe and Marcos Zampieri},
booktitle={Proceedings of NAACL},
year={2021}
}
```
```bibtex
@inproceedings{ranasinghe2021semeval,
title={{WLV-RIT at SemEval-2021 Task 5: A Neural Transformer Framework for Detecting Toxic Spans}},
author = {Ranasinghe, Tharindu and Sarkar, Diptanu and Zampieri, Marcos and Ororbia, Alex},
booktitle={Proceedings of SemEval},
year={2021}
}
``` |
ncoop57/multi-code-clippy | 380fc6200e5aa4fbf17dc696d23f29cfdfea8d58 | 2022-03-03T12:44:46.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | ncoop57 | null | ncoop57/multi-code-clippy | 28 | null | transformers | 7,343 | Entry not found |
nepp1d0/ChemBERTa_drug_state_classification | 080708b8d69082d295aa88ccbec56b33a4205501 | 2022-04-13T18:27:48.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | nepp1d0 | null | nepp1d0/ChemBERTa_drug_state_classification | 28 | null | transformers | 7,344 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ChemBERTa_drug_state_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ChemBERTa_drug_state_classification
This model is a fine-tuned version of [nepp1d0/ChemBERTa_drug_state_classification](https://huggingface.co/nepp1d0/ChemBERTa_drug_state_classification) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0463
- Accuracy: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
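As a starting point, here is a minimal inference sketch; it assumes the model classifies SMILES strings (the input below is aspirin, used as a placeholder), and the meaning of each predicted label should be checked against the checkpoint's `id2label` mapping.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "nepp1d0/ChemBERTa_drug_state_classification"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

print(classifier("CC(=O)OC1=CC=CC=C1C(=O)O"))  # placeholder SMILES input (aspirin)
print(model.config.id2label)                   # inspect what the label ids stand for
```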
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5063 | 1.0 | 240 | 0.3069 | 0.9160 |
| 0.3683 | 2.0 | 480 | 0.2135 | 0.9431 |
| 0.2633 | 3.0 | 720 | 0.1324 | 0.9577 |
| 0.1692 | 4.0 | 960 | 0.0647 | 0.9802 |
| 0.1109 | 5.0 | 1200 | 0.0463 | 0.9870 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
|
othrif/wav2vec2-large-xlsr-arabic | 28096ade748ce453de4df34305836c6f2167178f | 2021-03-29T18:43:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | othrif | null | othrif/wav2vec2-large-xlsr-arabic | 28 | null | transformers | 7,345 | ---
language: ar
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Arabic by Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 46.77
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model.to("cuda")
chars_to_ignore_regex = '[\؛\—\_get\«\»\ـ\ـ\,\?\.\!\-\;\:\"\“\%\‘\”\�\#\،\☭,\؟]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model over the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 46.77
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://huggingface.co/othrif/wav2vec2-large-xlsr-arabic/tree/main) |
pere/norwegian-gpt2 | be4f5522d2d7f5274d4800eb0579dab3102b0a09 | 2021-09-23T16:19:24.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"no",
"dataset:oscar",
"transformers",
"norwegian",
"GPT2",
"casual language modeling",
"license:cc-by-4.0"
] | text-generation | false | pere | null | pere/norwegian-gpt2 | 28 | null | transformers | 7,346 | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- causal language modeling
datasets:
- oscar
---
# Norwegian GPT-2 - Oscar
## Description
This is a sample reference model trained for one day on a TPU v3-8 using only the Oscar corpus. It is pretrained on the Norwegian language with a causal language modeling (CLM) objective. |
phiyodr/bert-large-finetuned-squad2 | 5f4b89a4f92c4c975bb405548fc62869dc70f312 | 2021-05-20T02:36:12.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"en",
"dataset:squad2",
"arxiv:1810.04805",
"arxiv:1806.03822",
"transformers",
"autotrain_compatible"
] | question-answering | false | phiyodr | null | phiyodr/bert-large-finetuned-squad2 | 28 | null | transformers | 7,347 | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
metrics:
- exact
- f1
widget:
- text: "What discipline did Winkelmann create?"
context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art."
---
# bert-large-finetuned-squad2
## Model description
This model is based on **[bert-large-uncased](https://huggingface.co/bert-large-uncased)** and was finetuned on **[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/)**. The corresponding papers can be found [here (model)](https://arxiv.org/abs/1810.04805) and [here (data)](https://arxiv.org/abs/1806.03822).
## How to use
```python
from transformers.pipelines import pipeline
model_name = "phiyodr/bert-large-finetuned-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'What discipline did Winkelmann create?',
'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. '
}
nlp(inputs)
```
## Training procedure
```
{
"base_model": "bert-large-uncased",
"do_lower_case": True,
"learning_rate": 3e-5,
"num_train_epochs": 4,
"max_seq_length": 384,
"doc_stride": 128,
"max_query_length": 64,
"batch_size": 96
}
```
## Eval results
- Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
- Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md))
```
{
"exact": 76.22336393497852,
"f1": 79.72527570261339,
"total": 11873,
"HasAns_exact": 76.19770580296895,
"HasAns_f1": 83.21157193271408,
"HasAns_total": 5928,
"NoAns_exact": 76.24894869638352,
"NoAns_f1": 76.24894869638352,
"NoAns_total": 5945
}
```
|
ponmari/QuestionAnsweingBert | f501022178b31ab2063d14f88af895c0d7c5a127 | 2021-05-20T02:51:30.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ponmari | null | ponmari/QuestionAnsweingBert | 28 | null | transformers | 7,348 | Entry not found |
pradhyra/AWSBlogBert | 4ab2eb2471755ab5a1cbbd5d2767ba3efbae3b3a | 2021-05-20T19:30:09.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pradhyra | null | pradhyra/AWSBlogBert | 28 | null | transformers | 7,349 | This model is pre-trained on blog articles from AWS Blogs.
## Pre-training corpora
The input text contains around 3000 blog articles on [AWS Blogs website](https://aws.amazon.com/blogs/) technical subject matter including AWS products, tools and tutorials.
## Pre-training details
I picked a Roberta architecture for masked language modeling (6-layer, 768-hidden, 12-heads, 82M parameters) and its corresponding ByteLevelBPE tokenization strategy. I then followed HuggingFace's Transformers [blog post](https://huggingface.co/blog/how-to-train) to train the model.
I used the following training set-up: 28k training steps with batches of 64 sequences of length 512 and an initial learning rate of 5e-5. The model achieved a training loss of 3.6 on the MLM task over 10 epochs.
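A minimal fill-mask usage sketch is shown below; the example sentence is a placeholder and not taken from the training data.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pradhyra/AWSBlogBert")

# <mask> is the RoBERTa-style mask token used by the ByteLevelBPE tokenizer
for pred in fill_mask("You can store objects in an Amazon S3 <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```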
|
pstroe/roberta-base-latin-cased | cfcf8c2ac7b0cc5c627a1dafa23e0ba8fc2eed16 | 2021-12-06T09:10:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | pstroe | null | pstroe/roberta-base-latin-cased | 28 | null | transformers | 7,350 | ## RoBERTa Latin model
This is a Latin RoBERTa-based language model.
The data it uses is the same as that used to compute the text-referenced HTR evaluation measures.
The intention of the Transformer-based LM is twofold: on the one hand, it will be used for the evaluation of HTR results, on the other, it should be used as a decoder for the TrOCR architecture.
The basis for the word unigram and character n-gram computations is the Latin part of the [cc100 corpus](http://data.statmt.org/cc-100/).
The overall corpus contains 2.5G of text data.
### Preprocessing
I undertook the following preprocessing steps:
- Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
- Use of [CLTK](http://www.cltk.org) for sentence splitting and normalisation.
- Retaining only lines containing letters of the Latin alphabet, numerals, and certain punctuation (--> `grep -P '^[A-z0-9ÄÖÜäöüÆæŒœᵫĀāūōŌ.,;:?!\- Ęę]+$' la.nolorem.tok.txt`).
- Deduplication of the corpus.
The result is a corpus of ~390 million tokens.
The dataset used to train this model is available [HERE](https://huggingface.co/datasets/pstroe/cc100-latin).
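A minimal masked-token prediction sketch is shown below; the Latin example sentence is a placeholder.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pstroe/roberta-base-latin-cased")

# <mask> is the RoBERTa-style mask token
for pred in fill_mask("Gallia est omnis divisa in partes <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```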
### Contact
For contact, reach out to Phillip Ströbel [via mail](mailto:[email protected]) or [via Twitter](https://twitter.com/CLingophil). |
sismetanin/rubert_conversational-ru-sentiment-rureviews | 6544d966bcbd5ab089438caf711e7394d8d1cdd2 | 2021-05-20T06:20:54.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
] | text-classification | false | sismetanin | null | sismetanin/rubert_conversational-ru-sentiment-rureviews | 28 | null | transformers | 7,351 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## RuBERT-Conversational-ru-sentiment-RuReviews
RuBERT-Conversational-ru-sentiment-RuReviews is a [RuBERT-Conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model fine-tuned on [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia.
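A minimal usage sketch is shown below; the review text is a placeholder, and the mapping from label ids to sentiment classes is not documented here, so it should be read from the checkpoint's `id2label` config. The table that follows compares the model with other baselines on several Russian sentiment datasets.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "sismetanin/rubert_conversational-ru-sentiment-rureviews"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

print(classifier("Отличное качество, платье село идеально!"))  # placeholder review ("great quality, the dress fit perfectly")
print(model.config.id2label)                                   # check what each label id means
```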
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>wighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark.
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@INPROCEEDINGS{Smetanin2019Sentiment,
author={Sergey Smetanin and Michail Komarov},
booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)},
title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks},
year={2019},
volume={01},
pages={482-486},
doi={10.1109/CBI.2019.00062},
ISSN={2378-1963},
month={July}
}
``` |
sshleifer/student_cnn_12_6 | d266f51bbf3b5df7d91a640820a341770ae7ab45 | 2021-06-14T08:36:40.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sshleifer | null | sshleifer/student_cnn_12_6 | 28 | null | transformers | 7,352 | Entry not found |
textattack/albert-base-v2-rotten-tomatoes | b19b7cd9422f089722c7b7417e6a13ef2c2ac963 | 2020-07-06T16:35:34.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/albert-base-v2-rotten-tomatoes | 28 | null | transformers | 7,353 | ## TextAttack Model Card
This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8808630393996247, as measured by the
eval set accuracy, found after 1 epoch.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
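A minimal inference sketch is shown below; the review sentence is a placeholder, and mapping `LABEL_0`/`LABEL_1` to negative/positive follows the rotten_tomatoes label convention, which is an assumption here.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "textattack/albert-base-v2-rotten-tomatoes"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("A gripping, beautifully shot film with a career-best performance."))
```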
|
textattack/facebook-bart-large-QNLI | 00434bfe2b7e6a3bfcf34024f642ef519f0a0fe1 | 2020-06-09T16:50:26.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/facebook-bart-large-QNLI | 28 | null | transformers | 7,354 | Entry not found |
tog/fr-boris-8bit | 15191226465af9f10024d696f93f5090376ead9d | 2022-01-26T07:03:25.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | tog | null | tog/fr-boris-8bit | 28 | null | transformers | 7,355 | Entry not found |
uer/chinese_roberta_L-10_H-768 | 6ab354686e240c51eb0c6203ec67fa1b7f0fc41a | 2022-07-15T08:15:19.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-10_H-768 | 28 | 2 | transformers | 7,356 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are the scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium:
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
yhavinga/gpt-neo-125M-dutch | 863872afb9d8299aeb1b45ab7ef2cf2b0b248624 | 2022-03-20T10:21:20.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt_neo",
"text-generation",
"nl",
"dataset:yhavinga/mc4_nl_cleaned",
"transformers",
"gpt2-medium",
"gpt2"
] | text-generation | false | yhavinga | null | yhavinga/gpt-neo-125M-dutch | 28 | 1 | transformers | 7,357 | ---
language: nl
widget:
- text: "In het jaar 2030 zullen we"
- text: "Toen ik gisteren volledig in de ban was van"
- text: "Studenten en leraren van de Bogazici Universiteit in de Turkse stad Istanbul"
- text: "In Israël was een strenge lockdown"
tags:
- gpt2-medium
- gpt2
pipeline_tag: text-generation
datasets:
- yhavinga/mc4_nl_cleaned
---
# GPT-Neo 125M pre-trained on cleaned Dutch mC4 🇳🇱
A GPT-Neo small model (125M parameters) trained from scratch on Dutch, with perplexity 20.9 on cleaned Dutch mC4.
## How To Use
You can use this GPT-Neo model directly with a pipeline for text generation.
```python
MODEL_DIR='yhavinga/gpt-neo-125M-dutch'
from transformers import pipeline, GPT2Tokenizer, GPTNeoForCausalLM
tokenizer = GPT2Tokenizer.from_pretrained(MODEL_DIR)
model = GPTNeoForCausalLM.from_pretrained(MODEL_DIR)
generator = pipeline('text-generation', model, tokenizer=tokenizer)
generated_text = generator('Wetenschappers verbonden aan de Katholieke Universiteit', max_length=256, do_sample=True, top_k=50, top_p=0.95, temperature=0.7, no_repeat_ngram_size=2)
```
*"Wetenschappers verbonden aan de Katholieke Universiteit van Nijmegen" - "hebben ontdekt dat de genen die een mens heeft, een enorme invloed hebben op het DNA van zijn lichaam.
Cellen kunnen zich beter binden aan het DNA dan andere soorten cellen. De genen die de cellen maken, zijn bepalend voor de groei van de cel.
Het DNA van een mens is niet alleen informatiedrager, maar ook een bouwstof voor het DNA. Het wordt gevonden in de genen van een cel. Als er op een cel een cel"*
## Tokenizer
* BPE tokenizer trained from scratch for Dutch on mC4 nl cleaned with scripts from the Huggingface
Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
## Dataset
This model was trained on of the `full` configuration (33B tokens) of
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty, Naughty, Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
## Models
TL;DR: [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) is the best model.
* The models with `a`/`b` in the step-column have been trained to step `a` of a total of `b` steps.
| | model | params | train seq len | ppl | loss | batch size | epochs | steps | optim | lr | duration | config |
|-----------------------------------------------------------------------------------|---------|--------|---------------|------|------|------------|--------|-----------------|-----------|--------|----------|-----------|
| [yhavinga/gpt-neo-125M-dutch](https://huggingface.co/yhavinga/gpt-neo-125M-dutch) | gpt neo | 125M | 512 | 20.9 | 3.04 | 128 | 1 | 190000/558608 | adam | 2.4e-3 | 1d 12h | full |
| [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) | gpt2 | 345M | 512 | 15.1 | 2.71 | 128 | 1 | 320000/520502 | adam | 8e-4 | 7d 2h | full |
| [yhavinga/gpt2-large-dutch](https://huggingface.co/yhavinga/gpt2-large-dutch) | gpt2 | 762M | 512 | 15.1 | 2.72 | 32 | 1 | 1100000/2082009 | adafactor | 3.3e-5 | 8d 15h | large |
| [yhavinga/gpt-neo-1.3B-dutch](https://huggingface.co/yhavinga/gpt-neo-1.3B-dutch) | gpt neo | 1.3B | 512 | 16.0 | 2.77 | 16 | 1 | 960000/3049896 | adafactor | 5e-4 | 7d 11h | full |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was also
instrumental in most, if not all, parts of the training. The following repositories were helpful in setting up the TPU-VM,
and training the models:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [HuggingFace Flax MLM examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
* [gpt2-medium-persian](https://huggingface.co/flax-community/gpt2-medium-persian)
* [gpt2-medium-indonesian](https://huggingface.co/flax-community/gpt2-medium-indonesian)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) |
vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated | 8e5cdfd34a1bce3bfcbc6d2a4c4294ddb2e32c34 | 2022-02-25T12:44:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | vocab-transformers | null | vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated | 28 | null | transformers | 7,358 | #cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated
This CrossEncoder was trained with MarginMSE loss from the [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated](https://hf.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated) checkpoint. **Word embedding matrix has been updated during training**.
You can load the model with [sentence-transformers](https://sbert.net):
```python
from sentence_transformers import CrossEncoder
from torch import nn
model_name = "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated"
model = CrossEncoder(model_name, default_activation_function=nn.Identity())
```
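Once loaded, the cross-encoder scores (query, passage) pairs, with higher scores indicating a better match. A minimal end-to-end sketch (the query and passages are made-up examples):

```python
from sentence_transformers import CrossEncoder
from torch import nn

model = CrossEncoder(
    "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated",
    default_activation_function=nn.Identity(),
)
query = "what is the capital of france"
passages = [
    "Paris is the capital and largest city of France.",
    "The Nile is the longest river in Africa.",
]
scores = model.predict([(query, p) for p in passages])  # one relevance score per pair
print(scores)
```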
Performance on TREC Deep Learning (nDCG@10):
- TREC-DL 19: 71.65
- TREC-DL 20: 73.6
|
jweb/japanese-soseki-gpt2-1b | de9952d7bd8ad68f4e37e84c919ffbf31f909bcc | 2022-03-06T02:17:59.000Z | [
"pytorch",
"rust",
"gpt2",
"text-generation",
"ja",
"dataset:cc100",
"dataset:wikipedia",
"dataset:AozoraBunko",
"transformers",
"japanese",
"lm",
"nlp",
"rust-bert",
"license:mit"
] | text-generation | false | jweb | null | jweb/japanese-soseki-gpt2-1b | 28 | 2 | transformers | 7,359 | ---
language: ja
thumbnail: https://github.com/ycat3/japanese-pretrained-models/blob/master/jweb.png
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
- rust
- rust-bert
license: mit
datasets:
- cc100
- wikipedia
- AozoraBunko
widget:
- text: "夏目漱石は、"
---
# japanese-soseki-gpt2-1b

This repository provides a 1.3B-parameter finetuned Japanese GPT2 model.
The model was finetuned by [jweb](https://jweb.asia/), based on a model trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
Both PyTorch (pytorch_model.bin) and Rust (rust_model.ot) models are provided.
# How to use the model
*NOTE:* Use `T5Tokenizer` to initiate the tokenizer.
python
~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("jweb/japanese-soseki-gpt2-1b")
model = AutoModelForCausalLM.from_pretrained("jweb/japanese-soseki-gpt2-1b")
if torch.cuda.is_available():
model = model.to("cuda")
text = "夏目漱石は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_length=128,
min_length=40,
do_sample=True,
repetition_penalty= 1.6,
early_stopping= True,
num_beams= 5,
temperature= 1.0,
top_k=500,
top_p=0.95,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
# sample output: 夏目漱石は、明治時代を代表する文豪です。夏目漱石の代表作は「吾輩は猫である」や「坊っちゃん」、「草枕」「三四郎」、それに「虞美人草(ぐびじんそう)」などたくさんあります。
~~~~
rust
~~~~
use rust_bert::gpt2::GPT2Generator;
use rust_bert::pipelines::common::{ModelType, TokenizerOption};
use rust_bert::pipelines::generation_utils::{GenerateConfig, LanguageGenerator};
use rust_bert::resources::{ RemoteResource, ResourceProvider};
use tch::Device;
fn main() -> anyhow::Result<()> {
let model_resource = Box::new(RemoteResource {
url: "https://huggingface.co/jweb/japanese-soseki-gpt2-1b/resolve/main/rust_model.ot".into(),
cache_subdir: "japanese-soseki-gpt2-1b/model".into(),
});
let config_resource = Box::new(RemoteResource {
url: "https://huggingface.co/jweb/japanese-soseki-gpt2-1b/resolve/main/config.json".into(),
cache_subdir: "japanese-soseki-gpt2-1b/config".into(),
});
let vocab_resource = Box::new(RemoteResource {
url: "https://huggingface.co/jweb/japanese-soseki-gpt2-1b/resolve/main/spiece.model".into(),
cache_subdir: "japanese-soseki-gpt2-1b/vocab".into(),
});
let vocab_resource_token = vocab_resource.clone();
let merges_resource = vocab_resource.clone();
let generate_config = GenerateConfig {
model_resource,
config_resource,
vocab_resource,
merges_resource, // not used
device: Device::Cpu,
repetition_penalty: 1.6,
min_length: 40,
max_length: 128,
do_sample: true,
early_stopping: true,
num_beams: 5,
temperature: 1.0,
top_k: 500,
top_p: 0.95,
..Default::default()
};
let tokenizer = TokenizerOption::from_file(
ModelType::T5,
vocab_resource_token.get_local_path().unwrap().to_str().unwrap(),
None,
true,
None,
None,
)?;
let mut gpt2_model = GPT2Generator::new_with_tokenizer(generate_config, tokenizer.into())?;
gpt2_model.set_device(Device::cuda_if_available());
let input_text = "夏目漱石は、";
let t1 = std::time::Instant::now();
let output = gpt2_model.generate(Some(&[input_text]), None);
println!("{}", output[0].text);
println!("Elapsed Time(ms):{}",t1.elapsed().as_millis());
Ok(())
}
// sample output: 夏目漱石は、明治から大正にかけて活躍した日本の小説家です。彼は「吾輩は猫である」や「坊っちゃん」、「草枕」「三四郎」、あるいは「虞美人草」などの小説で知られていますが、「明暗」のような小説も書いていました。
~~~~
# Model architecture
A 24-layer, 2048-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data.
# Finetuning
The model was finetuned on [Aozorabunko](https://github.com/aozorabunko/aozorabunko), especially Natsume Soseki's books.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols.
# License
[The MIT license](https://opensource.org/licenses/MIT)
|
hackathon-pln-es/poem-gen-spanish-t5-small | 7ef2e7df3e03360866d2e394dc3e5d36fb3e0e67 | 2022-04-03T03:30:07.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"es",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | hackathon-pln-es | null | hackathon-pln-es/poem-gen-spanish-t5-small | 28 | 4 | transformers | 7,360 | ---
license: mit
language: es
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small
results: []
---
# poem-gen-spanish-t5-small
This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the [Spanish Poetry Dataset](https://www.kaggle.com/andreamorgar/spanish-poetry-dataset/version/1) dataset.
The model was created during the [First Spanish Hackathon](https://somosnlp.org/hackathon) organized by [Somos NLP](https://somosnlp.org/).
The team who participated was composed by:
- 🇨🇺 [Alberto Carmona Barthelemy](https://huggingface.co/milyiyo)
- 🇨🇴 [Jorge Henao](https://huggingface.co/jorge-henao)
- 🇪🇸 [Andrea Morales Garzón](https://huggingface.co/andreamorgar)
- 🇮🇳 [Drishti Sharma](https://huggingface.co/DrishtiSharma)
It achieves the following results on the evaluation set:
- Loss: 2.8707
- Perplexity: 17.65
## Model description
The model was trained to generate Spanish poems conditioned on parameters such as style, sentiment, words to include, and a starting phrase.
Example:
```
poema:
estilo: Pablo Neruda &&
sentimiento: positivo &&
palabras: cielo, luna, mar &&
texto: Todos fueron a verle pasar
```
### How to use
You can use this model directly for text generation as follows:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'hackathon-pln-es/poem-gen-spanish-t5-small'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
author, sentiment, word, start_text = 'Pablo Neruda', 'positivo', 'cielo', 'Todos fueron a la plaza'
input_text = f"""poema: estilo: {author} && sentimiento: {sentiment} && palabras: {word} && texto: {start_text} """
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"],
do_sample = True,
max_length = 30,
repetition_penalty = 20.0,
top_k = 50,
top_p = 0.92)
detok_outputs = [tokenizer.decode(x, skip_special_tokens=True) for x in outputs]
res = detok_outputs[0]
```
## Training and evaluation data
The original [dataset](https://www.kaggle.com/andreamorgar/spanish-poetry-dataset/version/1) has the columns `author`, `content` and `title`.
For each poem we generate new examples:
- content: *line_i* , generated: *line_i+1*
- content: *concatenate(line_i, line_i+1)* , generated: *line_i+2*
- content: *concatenate(line_i, line_i+1, line_i+2)* , generated: *line_i+3*
The resulting dataset has the columns `author`, `content`, `title` and `generated`.
For each example we compute the sentiment of the generated column and extract its nouns. For sentiment analysis we used the model `mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis`, and for noun extraction we used spaCy.
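A rough sketch of this example-generation step (splitting poems on newlines and joining context lines with a newline are assumptions, not the exact preprocessing script):

```python
def make_examples(poem_content: str):
    """Build (content, generated) pairs from 1-, 2- and 3-line contexts."""
    lines = [line for line in poem_content.split("\n") if line.strip()]
    examples = []
    for i in range(len(lines)):
        for ctx_len in (1, 2, 3):
            if i + ctx_len < len(lines):
                examples.append({
                    "content": "\n".join(lines[i:i + ctx_len]),
                    "generated": lines[i + ctx_len],
                })
    return examples

print(make_examples("verso uno\nverso dos\nverso tres\nverso cuatro"))
```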
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7082 | 0.73 | 30000 | 2.8878 |
| 2.6251 | 1.46 | 60000 | 2.8940 |
| 2.5796 | 2.19 | 90000 | 2.8853 |
| 2.5556 | 2.93 | 120000 | 2.8749 |
| 2.527 | 3.66 | 150000 | 2.8850 |
| 2.5024 | 4.39 | 180000 | 2.8760 |
| 2.4887 | 5.12 | 210000 | 2.8749 |
| 2.4808 | 5.85 | 240000 | 2.8707 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hackathon-pln-es/twitter_sexismo-finetuned-exist2021-metwo | abb1735fa8735c95c2241b7f074af79592181a42 | 2022-05-18T08:48:34.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:EXIST Dataset",
"dataset:MeTwo Machismo and Sexism Twitter Identification dataset",
"transformers",
"license:apache-2.0",
"model-index"
] | text-classification | false | hackathon-pln-es | null | hackathon-pln-es/twitter_sexismo-finetuned-exist2021-metwo | 28 | 4 | transformers | 7,361 | ---
license: apache-2.0
tags:
-
datasets:
- EXIST Dataset
- MeTwo Machismo and Sexism Twitter Identification dataset
widget:
- text: "manejas muy bien para ser mujer"
- text: "En temas políticos hombres y mujeres son iguales"
- text: "Los ipad son unos equipos electrónicos"
metrics:
- accuracy
model-index:
- name: twitter_sexismo-finetuned-exist2021
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: EXIST Dataset
type: EXIST Dataset
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
# twitter_sexismo-finetuned-exist2021
This model is a fine-tuned version of [pysentimiento/robertuito-hate-speech](https://huggingface.co/pysentimiento/robertuito-hate-speech) on the EXIST dataset and MeTwo: Machismo and Sexism Twitter Identification dataset https://github.com/franciscorodriguez92/MeTwo.
It achieves the following results on the evaluation set:
- Loss: 0.54
- Accuracy: 0.83
## Model description
Model for the 'Somos NLP' Hackathon for detecting sexism in Spanish tweets. Created by:
- **medardodt**
- **MariaIsabel**
- **ManRo**
- **lucel172**
- **robertou2**
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- my_learning_rate = 5E-5
- my_adam_epsilon = 1E-8
- my_number_of_epochs = 8
- my_warmup = 3
- my_mini_batch_size = 32
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
|Epoch|Training Loss|Validation Loss|Accuracy|F1|Precision|
|----|-------|-------|-------|-------|-------|
|1|0.389900 |0.397857 |0.827133 |0.699620 |0.786325 |
|2|0.064400 |0.544625 |0.831510 |0.707224 |0.794872 |
|3|0.004800 |0.837723 |0.818381 |0.704626 |0.733333 |
|4|0.000500 |1.045066 |0.820569 | 0.702899 |0.746154 |
|5|0.000200 |1.172727 |0.805252 |0.669145 |0.731707 |
|6|0.000200 |1.202422 |0.827133 |0.720848 |0.744526 |
|7|0.000000 |1.195012 |0.827133 |0.718861 |0.748148 |
|8|0.000100 |1.215515 |0.824945 |0.705882 |0.761905 |
|9|0.000100|1.233099 |0.827133 |0.710623 |0.763780 |
|10|0.000100|1.237268 |0.829322 |0.713235 |0.769841 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
## Model in Action
Fast usage with pipelines:
``` python
###libraries required
!pip install transformers
from transformers import pipeline
### usage pipelines
model_checkpoint = "hackathon-pln-es/twitter_sexismo-finetuned-exist2021-metwo"
pipeline_nlp = pipeline("text-classification", model=model_checkpoint)
pipeline_nlp("mujer al volante peligro!")
#pipeline_nlp("¡me encanta el ipad!")
#pipeline_nlp (["mujer al volante peligro!", "Los hombre tienen más manias que las mujeres", "me encanta el ipad!"] )
# OUTPUT MODEL #
# LABEL_0: "NON SEXISM"or LABEL_1: "SEXISM" and score: probability of accuracy per model.
# [{'label': 'LABEL_1', 'score': 0.9967633485794067}]
# [{'label': 'LABEL_0', 'score': 0.9934417009353638}]
#[{'label': 'LABEL_1', 'score': 0.9967633485794067},
# {'label': 'LABEL_1', 'score': 0.9755664467811584},
# {'label': 'LABEL_0', 'score': 0.9955045580863953}]
```
## More Information Process
### Challenges
One of the main challenges in this process was finding a dataset in Spanish. We managed to obtain (upon request) the dataset used in [EXIST: sEXism Identification in Social neTworks](http://nlp.uned.es/exist2021/), which was a great starting point for the model. Unfortunately, this dataset has limitations due to licenses and policies that prevent it from being shared freely.
This dataset covers any kind of sexist expression or related phenomena, including descriptive or reported statements where the sexist message is a report or description of sexist behaviour. We used its 3,541 labelled tweets in Spanish. We were later able to obtain another Spanish dataset, [MeTwo: Machismo and Sexism Twitter Identification dataset](https://github.com/franciscorodriguez92/MeTwo). This dataset contains the id of each tweet with its respective label, which allowed us to retrieve the tweet texts and extend the original dataset.
Another challenge was getting the fine-tuning experiments started, since there are many variables to validate and test (from models such as BETO or RoBERTa to hyperparameters such as the learning rate), the available time was limited to two weeks, and there was a learning curve to climb. For this, the first experiments were based on the parameters reported by de Paula et al. (2021), which provided both a starting point and a target to beat: the **_0.790 accuracy_** obtained by that previous work on identifying sexist tweets in Spanish.
Several experiments were run in parallel to find the best model. After a collaborative fine-tuning process, we reached an accuracy of **83%**.
### Future Work
We propose extending the dataset. It is possible to download much larger amounts of Spanish tweets and apply active-learning techniques to select a reduced set of tweets to label via crowdsourcing, so that these labelled examples can then be used to label the rest. Data-augmentation techniques can also be applied to duplicate and extend the dataset. Running further experiments with other models and improving the current one are additional challenges proposed as future work.
### Possible Applications
First of all, it is extremely important to give more visibility to the problem of _sexism on social media_, particularly in Spanish. Transfer learning makes it possible to reuse and build on previously trained models, and the hope is that new research groups, students, etc. will use the current model as a base to develop their own and build a better one. In this way, a tool could be built that identifies sexist tweets in real time and removes them before they spread.
### References
1. de Paula, A. F. M., da Silva, R. F., & Schlicht, I. B. (2021). Sexism Prediction in Spanish and English Tweets Using Monolingual and Multilingual BERT and Ensemble Models. arXiv preprint arXiv:2111.04551.
2. Rodríguez-Sánchez, F., Carrillo-de-Albornoz, J., Plaza, L., Gonzalo, J., Rosso, P., Comet, M., & Donoso, T. (2021). Overview of exist 2021: sexism identification in social networks. Procesamiento del Lenguaje Natural, 67, 195-207. |
huggingtweets/twitter | 5926a658115bb8084687850aaa93f7948b80d104 | 2022-03-21T13:19:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/twitter | 28 | 1 | transformers | 7,362 | ---
language: en
thumbnail: http://www.huggingtweets.com/twitter/1647868756403/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488548719062654976/u6qfBBkF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Twitter</div>
<div style="text-align: center; font-size: 14px;">@twitter</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Twitter.
| Data | Twitter |
| --- | --- |
| Tweets downloaded | 3194 |
| Retweets | 45 |
| Short tweets | 625 |
| Tweets kept | 2524 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3jvba1eq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @twitter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mzlt8tly) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mzlt8tly/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/twitter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Graphcore/gpt2-medium-wikitext-103 | 31d8e00b7ba45dd878a827682915499075720c88 | 2022-05-25T18:14:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"dataset:wikitext",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Graphcore | null | Graphcore/gpt2-medium-wikitext-103 | 28 | null | transformers | 7,363 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: clm_output_medium
results: []
---
# Graphcore/gpt2-medium-wikitext-103
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore's IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project.
## Model description
GPT2 is a large transformer-based language model. It is built using transformer decoder blocks, whereas BERT uses transformer encoder blocks. Layer normalisation is moved to the input of each sub-block, similar to a pre-activation residual network, and an additional layer normalisation is added after the final block.
Paper link : [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the [wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6973
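Although the model was trained on IPUs, the exported checkpoint is a standard GPT-2 model, so it can also be loaded for inference with plain `transformers` (a quick sketch, independent of the IPU setup; the prompt is arbitrary):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Graphcore/gpt2-medium-wikitext-103")
output = generator("The history of natural language processing", max_length=50, do_sample=True)
print(output[0]["generated_text"])
```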
## Training and evaluation data
Trained on the WikiText-103 dataset:
- [HuggingFace/wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/language-modeling/run_clm.py \
--model_name_or_path gpt2-medium \
--ipu_config_name Graphcore/gpt2-medium-ipu \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 \
--do_train \
--do_eval \
--num_train_epochs 10 \
--dataloader_num_workers 64 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 256 \
--output_dir /tmp/clm_output_medium \
--logging_steps 5 \
--learning_rate 1e-5 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.1 \
--ipu_config_overrides="embedding_serialization_factor=5,inference_device_iterations=9,replication_factor=2,inference_replication_factor=2,ipus_per_replica=8,layers_per_ipu=[0 3 3 3 3 4 4 4],matmul_proportion=0.25" \
--dataloader_drop_last \
--pod_type pod16
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- total_eval_batch_size: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
"epoch": 10.0,
"train_loss": 2.8070910754504506,
"train_runtime": 11217.8167,
"train_samples": 114248,
"train_samples_per_second": 101.845,
"train_steps_per_second": 0.099
***** eval metrics *****
"eval_loss": 2.697265625,
"eval_samples": 240,
"perplexity": 14.83910053420958
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Hieu/nft_label | 608833128e19b254f93934cd7583c25486051b83 | 2022-04-03T07:54:51.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Hieu | null | Hieu/nft_label | 28 | null | transformers | 7,364 | Entry not found |
enimai/OPUS-mt-en-fr-finetuned-MUST-C | 8f20e7c89a406842044245e660e0aaaea812e491 | 2022-04-04T11:49:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | enimai | null | enimai/OPUS-mt-en-fr-finetuned-MUST-C | 28 | null | transformers | 7,365 | ---
license: apache-2.0
---
|
Raychanan/Longformer_Conflict | 0f40ee73ab18680bc5d63ce24f7aa74f1cbcb93b | 2022-04-16T15:30:07.000Z | [
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"transformers"
] | text-classification | false | Raychanan | null | Raychanan/Longformer_Conflict | 28 | null | transformers | 7,366 | from transformers import TrainingArguments

training_args = TrainingArguments(
output_dir="./results",
learning_rate=5e-5,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=5,
weight_decay=0.01,
evaluation_strategy="epoch",
push_to_hub=True
) |
engmatic-earth/mt5-zh-ja-en-trimmed-fine-tuned-v1 | 084211407f80d39ccc9e8c75817a8b77f1bbccae | 2022-04-17T11:00:50.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | engmatic-earth | null | engmatic-earth/mt5-zh-ja-en-trimmed-fine-tuned-v1 | 28 | null | transformers | 7,367 | ---
license: cc-by-nc-sa-4.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-zh-ja-en-trimmed-fine-tuned-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-zh-ja-en-trimmed-fine-tuned-v1
This model is a fine-tuned version of [K024/mt5-zh-ja-en-trimmed](https://huggingface.co/K024/mt5-zh-ja-en-trimmed) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0225
- Bleu: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
athairus/m2m100_418M_finetuned_litk_es_en | 224f5caa9f9d7dc207dc96f636b595b26bdb49f6 | 2022-04-17T19:30:02.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | athairus | null | athairus/m2m100_418M_finetuned_litk_es_en | 28 | null | transformers | 7,368 | Entry not found |
xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing | 2d1fc8be6bfb98266e944718a0ad61743707aaef | 2022-04-26T05:51:03.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2203.07836",
"transformers",
"AMRBART",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | xfbai | null | xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing | 28 | 1 | transformers | 7,369 | ---
language: en
tags:
- AMRBART
license: mit
---
## AMRBART-large-finetuned-AMR2.0-AMRParsing
This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR2.0 dataset. It achieves a Smatch score of 85.4 on the evaluation set. More details are given in the paper: [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al., ACL 2022.
## Model description
Same as AMRBART.
## Training data
The model is finetuned on [AMR2.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 36,521
training instances, 1,368 validation instances, and 1,371 test instances.
## Intended uses & limitations
You can use the model for AMR parsing, but it's mostly intended to be used in the domain of News.
## How to use
Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR2.0-AMRParsing")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
## BibTeX entry and citation info
Please cite this paper if you find this model helpful
```bibtex
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}
``` |
CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_77 | ca8a529c4808f6f60d9b6c451fb43bb157cc305e | 2022-05-11T01:40:16.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_77 | 28 | null | transformers | 7,370 | Entry not found |
CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_88 | 4bf0792fca5708c74d1f6557a4ea8d4d88837ad7 | 2022-05-11T02:32:22.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_88 | 28 | null | transformers | 7,371 | Entry not found |
dragonSwing/vibert-capu | 5859a7baaa9ed2d6f7923e54f570b0ff94b566c5 | 2022-05-17T15:02:30.000Z | [
"pytorch",
"bert",
"vi",
"dataset:oscar-corpus/OSCAR-2109",
"transformers",
"capitalization",
"punctuation",
"token-classification",
"license:cc-by-sa-4.0"
] | token-classification | false | dragonSwing | null | dragonSwing/vibert-capu | 28 | null | transformers | 7,372 | ---
language:
- vi
tags:
- capitalization
- punctuation
- token-classification
license: cc-by-sa-4.0
datasets:
- oscar-corpus/OSCAR-2109
metrics:
- accuracy
- precision
- recall
- f1
---
# ✨ vibert-capitalization-punctuation
This is a [viBERT](https://huggingface.co/FPTAI/vibert-base-cased) model finetuned for punctuation restoration on the [OSCAR-2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109) dataset.
The model predicts the punctuation and upper-casing of plain, lower-cased text. Typical use cases are ASR output or other settings where the text has lost its punctuation.
This model is intended for direct use as a punctuation restoration model for the general Vietnamese language. Alternatively, you can use this for further fine-tuning on domain-specific texts for punctuation restoration tasks.
The model restores the following punctuation marks -- **[. , : ? ]**
The model also restores the complex upper-casing of words like *YouTube*, *MobiFone*.
-----------------------------------------------
## 🚋 Usage
**Below is a quick way to get up and running with the model.**
1. Download files from hub
```python
import os
import shutil
import sys
from huggingface_hub import snapshot_download
cache_dir = "./capu"
def download_files(repo_id, cache_dir=None, ignore_regex=None):
download_dir = snapshot_download(repo_id=repo_id, cache_dir=cache_dir, ignore_regex=ignore_regex)
if cache_dir is None or download_dir == cache_dir:
return download_dir
file_names = os.listdir(download_dir)
for file_name in file_names:
shutil.move(os.path.join(download_dir, file_name), cache_dir)
os.rmdir(download_dir)
return cache_dir
cache_dir = download_files(repo_id="dragonSwing/vibert-capu", cache_dir=cache_dir, ignore_regex=["*.json", "*.bin"])
sys.path.append(cache_dir)
```
2. Sample python code
```python
import os
from gec_model import GecBERTModel
model = GecBERTModel(
vocab_path=os.path.join(cache_dir, "vocabulary"),
model_paths="dragonSwing/vibert-capu",
split_chunk=True
)
model("theo đó thủ tướng dự kiến tiếp bộ trưởng nông nghiệp mỹ tom wilsack bộ trưởng thương mại mỹ gina raimondo bộ trưởng tài chính janet yellen gặp gỡ thượng nghị sĩ patrick leahy và một số nghị sĩ mỹ khác")
# Always return list of outputs.
# ['Theo đó, Thủ tướng dự kiến tiếp Bộ trưởng Nông nghiệp Mỹ Tom Wilsack, Bộ trưởng Thương mại Mỹ Gina Raimondo, Bộ trưởng Tài chính Janet Yellen, gặp gỡ Thượng nghị sĩ Patrick Leahy và một số nghị sĩ Mỹ khác.']
model("những gói cước năm g mobifone sẽ mang đến cho bạn những trải nghiệm mới lạ trên cả tuyệt vời so với mạng bốn g thì tốc độ truy cập mạng 5 g mobifone được nhận định là siêu đỉnh với mức truy cập nhanh gấp 10 lần")
# ['Những gói cước 5G MobiFone sẽ mang đến cho bạn những trải nghiệm mới lạ trên cả tuyệt vời. So với mạng 4G thì tốc độ truy cập mạng 5G MobiFone được nhận định là siêu đỉnh với mức truy cập nhanh gấp 10 lần.']
```
**This model can work on arbitrarily large text in Vietnamese language.**
-----------------------------------------------
## 📡 Training data
Here is the number of text samples we used for fine-tuning the model:
| Language | Number of text samples |
| --- | --- |
| Vietnamese | 5,600,000 |
-----------------------------------------------
## 🎯 Accuracy
Below is a breakdown of the performance of the model by each label on 10,000 held-out text samples:
| label | precision | recall | f1-score | support |
| --- | --- | --- | --- | --- |
| **Upper** | 0.88 | 0.89 | 0.89 | 56497 |
| **Complex-Upper** | 0.92 | 0.83 | 0.88 | 480 |
| **.** | 0.81 | 0.82 | 0.82 | 18139 |
| **,** | 0.73 | 0.70 | 0.71 | 22961 |
| **:** | 0.74 | 0.56 | 0.64 | 1432 |
| **?** | 0.80 | 0.76 | 0.78 | 1730 |
| **none** | 0.99 | 0.99 | 0.99 |475611 |
-----------------------------------------------
|
ncfrey/ChemGPT-1.2B | 0164ca1f1754cd36b43c34b185373ee3672e7d65 | 2022-06-15T15:44:24.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"chemistry"
] | text-generation | false | ncfrey | null | ncfrey/ChemGPT-1.2B | 28 | 1 | transformers | 7,373 | ---
tags:
- chemistry
---
# ChemGPT 1.2B
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
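For example, a minimal generation sketch (the prompt is an arbitrary SELFIES fragment and the sampling settings are illustrative, not recommended values):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ncfrey/ChemGPT-1.2B")
model = AutoModelForCausalLM.from_pretrained("ncfrey/ChemGPT-1.2B")

# Prompt with a short SELFIES fragment and sample a continuation.
inputs = tokenizer("[C][C][O]", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```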
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
``` |
VMware/vbert-2021-base | dcdbfcad3a2d87d1083659cab9ff44abc2cb1661 | 2022-06-16T22:27:04.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"eng",
"transformers",
"tensorflow",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | VMware | null | VMware/vbert-2021-base | 28 | 2 | transformers | 7,374 | ---
language:
- "eng"
thumbnail: "url to a thumbnail used in social sharing"
tags:
- "pytorch"
- "tensorflow"
license: "apache-2.0"
---
# vBERT-2021-BASE
### Model Info:
<ul>
<li> Authors: R&D AI Lab, VMware Inc.
<li> Model date: April, 2022
<li> Model version: 2021-base
<li> Model type: Pretrained language model
<li> License: Apache 2.0
</ul>
#### Motivation
Traditional BERT models struggle with VMware-specific words (Tanzu, vSphere, etc.), technical terms, and compound words. (<a href="https://medium.com/@rickbattle/weaknesses-of-wordpiece-tokenization-eb20e37fec99">Weaknesses of WordPiece Tokenization</a>)
We have created our vBERT model to address the aforementioned issues. We have replaced the first 1k unused tokens of BERT's vocabulary with VMware-specific terms to create a modified vocabulary. We then pretrained the 'bert-base-uncased' model for an additional 78K steps (71k with MSL_128 and 7k with MSL_512) (approximately 5 epochs) on VMware domain data.
#### Intended Use
The model functions as a VMware-specific Language Model.
#### How to Use
Here is how to use this model to get the features of a given text in PyTorch:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-base')
model = BertModel.from_pretrained("VMware/vbert-2021-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-base')
model = TFBertModel.from_pretrained('VMware/vbert-2021-base')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Training
#### - Datasets
Publicly available VMware text data such as VMware Docs, Blogs, etc. were used for creating the pretraining corpus. Sourced in May, 2021. (~320,000 Documents)
#### - Preprocessing
<ul>
<li>Decoding HTML
<li>Decoding Unicode
<li>Stripping repeated characters
<li>Splitting compound word
<li>Spelling correction
</ul>
#### - Model performance measures
We benchmarked vBERT on various VMware-specific NLP downstream tasks (IR, classification, etc).
The model scored higher than the 'bert-base-uncased' model on all benchmarks.
### Limitations and bias
Since the model is further pretrained on the BERT model, it may have the same biases embedded within the original BERT model.
The data needs to be preprocessed using our internal vNLP Preprocessor (not available to the public) to maximize its performance.
|
anuj55/paraphrase-mpnet-base-v2-finetuned-polifact | b7669ce7827236d430928c4bc1d0ca7a11eeb70a | 2022-05-16T20:56:23.000Z | [
"pytorch",
"tensorboard",
"mpnet",
"text-classification",
"transformers"
] | text-classification | false | anuj55 | null | anuj55/paraphrase-mpnet-base-v2-finetuned-polifact | 28 | null | transformers | 7,375 | Entry not found |
sabersol/bert-base-uncased-emotion | f80494fad3c80a265ce6fde5816f2c4549067e2d | 2022-05-27T03:25:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:cc-by-nc-sa-4.0"
] | text-classification | false | sabersol | null | sabersol/bert-base-uncased-emotion | 28 | null | transformers | 7,376 | ---
license: cc-by-nc-sa-4.0
---
# CITDA:
Fine-tuned `bert-base-uncased` on the `emotions` dataset
Demo Notebook: https://colab.research.google.com/drive/10ZCFvlf2UV3FjU4ymf4OoipQvqHbIItG?usp=sharing
## Packages
- Install `torch`
- Also, `pip install transformers datasets scikit-learn wandb seaborn python-dotenv`
## Train
1. Rename `.env.example` to `.env` and set an API key from [wandb](https://wandb.ai/authorize)
2. You can adjust model parameters in the `explainableai.py` file.
3. The model (`pytorch_model.bin`) is based on `bert-base-uncased` and already trained on the `emotions` dataset.
To reproduce the training, run `finetune-emotions.py`. You can change the base model or the dataset by changing that file's code.
## Example
Run `example.py`
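If you only need quick predictions without the repository scripts, a sketch along these lines should also work (assuming the uploaded checkpoint includes its classification head; labels may show up as generic LABEL_x unless mapped in the config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sabersol/bert-base-uncased-emotion")
print(classifier("I can't believe how wonderful today turned out!"))
```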
## Train
The model is already trained on `bert-base-uncased` with the [emotions dataset](https://huggingface.co/datasets/emotion). However, you can change parameters and re-fine-tune the model by running `finetune-emotions.py`. |
north/t5_xxl_NCC_lm | eb2f5bf7a7b0f25372eb9ab0f1968d34240589b8 | 2022-06-01T19:42:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | north | null | north/t5_xxl_NCC_lm | 28 | null | transformers | 7,377 | ---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
# North-T5
The North-T5-models are a set of Norwegian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|✔||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xxl/norwegian_NCC_plus_English_pluss100k_lm_t5x_xxl/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to run their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing for instance translation and NLI, it is well documented that there is a clear benefit in doing a step of unsupervised LM training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference both in Flax, PyTorch and TensorFlow format.
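As a minimal illustration of the Transformers format (not an official recipe), loading one of the converted checkpoints in PyTorch could look like the sketch below. Since the XXL model on this card is only distributed as a T5X checkpoint, the sketch assumes one of the smaller converted variants; the span-masking prompt mirrors the widget examples above, and the model still needs task-specific finetuning for most downstream use.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch: load a HuggingFace-converted North-T5 checkpoint in PyTorch.
# The base/large/XL variants follow the same API; the XXL model is only available in T5X format.
model_name = "north/t5_base_NCC_lm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Span-masking prompt in the style of the widget examples above.
text = "På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```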
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from the users.
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition, he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
tursunali/bpt2 | 9ca6484cc1837f3c3ec5fec57dcb4b9ddf37e246 | 2022-05-24T04:12:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"de",
"transformers"
] | text-generation | false | tursunali | null | tursunali/bpt2 | 28 | null | transformers | 7,378 | ---
language: de
widget:
- text: "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."
---
# BPT2
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("tursunali/bpt2")
model = AutoModelForCausalLM.from_pretrained("tursunali/bpt2")
prompt = "<your prompt>"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
Also, two tricks might improve the generated text:
```python
import torch

max_length = 100  # choose a maximum output length that suits your use case

output = model.generate(
    # during training an EOS token was used to mark the beginning of each text,
    # so it can help to insert it at the start
    torch.tensor(
        [tokenizer.eos_token_id] + tokenizer.encode(prompt)
    ).unsqueeze(0),
    do_sample=True,
    # setting bad_words_ids=[[0]] disallows generating an EOS token; without this the model is
    # prone to ending generation early because a significant number of texts from the training corpus
    # is quite short
    bad_words_ids=[[0]],
    max_length=max_length,
)[0]
print(tokenizer.decode(output))
```
## Citing
Please cite BPT2 as follows:
```
@misc{Backpacker_Trail_German_large_2022,
author = {BackpackerTrail, Tursunali Kholdorov},
title = {{BPT2: Backpacker Trail German versions of BPT2}},
url = {https://github.com/Tursunali-Kholdorov/bptTrainer},
year = {2022}
}
```
|
sharif-dal/dal-bert | ff8842a0a2f50f5c07343d37acf02dd29ccef19b | 2022-07-02T10:18:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"fa",
"arxiv:1810.04805",
"transformers",
"bert-fa",
"bert-persian",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | sharif-dal | null | sharif-dal/dal-bert | 28 | 2 | transformers | 7,379 | ---
license: apache-2.0
language: fa
widget:
- text: "از هر دستی بگیری از همون [MASK] میدی"
- text: "این آخرین باره بهت [MASK] میگم"
- text: 'چرا آن جوان بیچاره را به سخره [MASK]'
- text: 'آخه محسن [MASK] هم شد خواننده؟'
- text: 'پسر عجب [MASK] زد'
tags:
- bert-fa
- bert-persian
model-index:
- name: dal-bert
results: []
---
DAL-BERT: Another pre-trained language model for Persian
---
DAL-BERT is a transformer-based model trained on more than 80 gigabytes of Persian text including both formal and informal (conversational) contexts. The architecture of this model follows the original BERT [[Devlin et al.](https://arxiv.org/abs/1810.04805)].
How to use the Model
---
```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained('sharif-dal/dal-bert')
tokenizer = BertTokenizer.from_pretrained('sharif-dal/dal-bert')
fill_sentence = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_sentence('اینجا جمله مورد نظر خود را بنویسید و کلمه موردنظر را [MASK] کنید')
```
The Training Data
---
The above-mentioned model was trained on a collection of newspapers, news agencies' websites, technology-related sources, people's comments, magazines, literary criticism, and some blogs.
Evaluation
---
| Training Loss | Epoch | Step |
|:-------------:|:-----:|:-----:|
| 2.1855 | 13 | 7649486 |
Contributors
---
- [Arman Malekzadeh](http://ce.sharif.edu/~malekzaadeh/), PhD Student in AI @ Sharif University of Technology [[Linkedin](https://www.linkedin.com/in/arman-malekzadeh/)] [[Github](https://github.com/arm-on)]
- Amirhossein Ramazani, Master's Student in AI @ Sharif University of Technology [[Linkedin](https://www.linkedin.com/in/amirhossein-ramazani/)] [[Github](https://github.com/amirhossein1376)]
|
finiteautomata/ner-leg | d857431dae1ae112d4dcdb6533c55f6e256d8eec | 2022-06-22T13:45:53.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | finiteautomata | null | finiteautomata/ner-leg | 28 | null | transformers | 7,380 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: ner-leg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-leg
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0656
- Precision: 0.8873
- Recall: 0.8565
- Macro F1: 0.8716
- Micro F1: 0.8716
- Accuracy: 0.9786
- Marker F1: 0.8993
- Marker Precision: 0.8701
- Marker Recall: 0.9306
- Reference F1: 0.9558
- Reference Precision: 0.9474
- Reference Recall: 0.9643
- Term F1: 0.8621
- Term Precision: 0.8843
- Term Recall: 0.8410
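Pending a fuller model description, a minimal inference sketch with the Transformers token-classification pipeline might look like this; the example sentence is an assumption, and the entity groups follow the Marker/Reference/Term scheme reported in the metrics above.
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned checkpoint as a token-classification (NER) pipeline.
# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="finiteautomata/ner-leg",
    aggregation_strategy="simple",
)

# Output: a list of dicts with entity_group (Marker/Reference/Term), score and character offsets.
print(ner("The term 'force majeure' is defined in Section 12 of the agreement."))
```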
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Macro F1 | Micro F1 | Accuracy | Marker F1 | Marker Precision | Marker Recall | Reference F1 | Reference Precision | Reference Recall | Term F1 | Term Precision | Term Recall |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:|:--------:|:---------:|:----------------:|:-------------:|:------------:|:-------------------:|:----------------:|:-------:|:--------------:|:-----------:|
| No log | 1.0 | 100 | 0.1980 | 0.5457 | 0.7472 | 0.6307 | 0.6307 | 0.9212 | 0.8557 | 0.7748 | 0.9556 | 0.8249 | 0.7228 | 0.9605 | 0.5782 | 0.4966 | 0.6919 |
| No log | 2.0 | 200 | 0.1972 | 0.5982 | 0.8348 | 0.6970 | 0.6970 | 0.9253 | 0.8683 | 0.7739 | 0.9889 | 0.8503 | 0.7802 | 0.9342 | 0.6576 | 0.5578 | 0.8009 |
| No log | 3.0 | 300 | 0.2805 | 0.6085 | 0.8248 | 0.7003 | 0.7003 | 0.9244 | 0.8279 | 0.712 | 0.9889 | 0.8118 | 0.7340 | 0.9079 | 0.6693 | 0.5799 | 0.7915 |
| No log | 4.0 | 400 | 0.2992 | 0.5858 | 0.8461 | 0.6923 | 0.6923 | 0.9234 | 0.8381 | 0.7333 | 0.9778 | 0.8488 | 0.7604 | 0.9605 | 0.6556 | 0.5490 | 0.8136 |
| 0.1651 | 5.0 | 500 | 0.3312 | 0.5965 | 0.8048 | 0.6851 | 0.6851 | 0.9246 | 0.8476 | 0.7417 | 0.9889 | 0.8521 | 0.7742 | 0.9474 | 0.6435 | 0.5572 | 0.7615 |
| 0.1651 | 6.0 | 600 | 0.3450 | 0.6161 | 0.8335 | 0.7085 | 0.7085 | 0.9281 | 0.8911 | 0.8036 | 1.0 | 0.8757 | 0.7957 | 0.9737 | 0.6653 | 0.5731 | 0.7930 |
| 0.1651 | 7.0 | 700 | 0.3504 | 0.6263 | 0.8223 | 0.7110 | 0.7110 | 0.9284 | 0.8955 | 0.8108 | 1.0 | 0.8824 | 0.7979 | 0.9868 | 0.6662 | 0.5829 | 0.7773 |
| 0.1651 | 8.0 | 800 | 0.3739 | 0.6190 | 0.8173 | 0.7044 | 0.7044 | 0.9280 | 0.8955 | 0.8108 | 1.0 | 0.8876 | 0.8065 | 0.9868 | 0.6577 | 0.5734 | 0.7709 |
| 0.1651 | 9.0 | 900 | 0.4025 | 0.6164 | 0.8185 | 0.7032 | 0.7032 | 0.9284 | 0.89 | 0.8091 | 0.9889 | 0.8876 | 0.8065 | 0.9868 | 0.6573 | 0.5711 | 0.7741 |
| 0.0104 | 10.0 | 1000 | 0.3988 | 0.6240 | 0.8185 | 0.7082 | 0.7082 | 0.9285 | 0.8955 | 0.8108 | 1.0 | 0.8876 | 0.8065 | 0.9868 | 0.6622 | 0.5794 | 0.7725 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RUCAIBox/mvp-multi-task | 2462b3c380379dbdc99dfb058f51e35f084b29e1 | 2022-06-27T02:27:55.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"summarization",
"conversational",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mvp-multi-task | 28 | null | transformers | 7,381 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- summarization
- conversational
pipeline_tag: text2text-generation
widget:
- text: "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons."
example_title: "Summarization"
- text: "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?"
example_title: "Dialog"
- text: "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"
example_title: "Data-to-text"
- text: "Given the story title: I think all public schools should have a uniform dress code."
example_title: "Story Generation"
- text: "Answer the following question: From which country did Angola achieve independence in 1975?"
example_title: "Question Answering"
- text: "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing ."
example_title: "Question Generation"
---
# MVP-multi-task
The MVP-multi-task model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-multi-task is a prompt-based model in which MVP is further equipped with prompts pre-trained using a mixture of labeled datasets. It is a variant (MVP+M) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.
## Example
For summarization:
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-multi-task")
>>> inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
```
For data-to-text generation:
```python
>>> from transformers import MvpTokenizerFast, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-multi-task")
>>> inputs = tokenizer(
... "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
vaibhavagg303/T5-test | edbff64988ee98cf29620c17a7b09eb60f808b81 | 2022-06-08T04:48:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vaibhavagg303 | null | vaibhavagg303/T5-test | 28 | null | transformers | 7,382 | Entry not found |
edmundhui/mental_health_trainer | 0ea11c1763041dbbb02eff39e023a4b0b58e5ceb | 2022-06-21T20:41:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | edmundhui | null | edmundhui/mental_health_trainer | 28 | null | transformers | 7,383 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mental_health_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental_health_trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [reddit_mental_health_posts](https://huggingface.co/datasets/solomonk/reddit_mental_health_posts)
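As a quick illustration (not part of the original training setup), the checkpoint can be loaded as a text-classification pipeline; the label set, presumably the source subreddits, is read from the model config and is an assumption here.
```python
from transformers import pipeline

# Minimal sketch: classify a post with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="edmundhui/mental_health_trainer")

# The predicted label names come from the model config (likely the source subreddits).
print(classifier("Lately I've been feeling overwhelmed and can't focus on anything."))
```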
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
santiviquez/ssr-base-finetuned-samsum-en | 61500501e1d281dbbdb20fb598a19dc4fd94fb36 | 2022-06-27T20:54:46.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:samsum",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | santiviquez | null | santiviquez/ssr-base-finetuned-samsum-en | 28 | null | transformers | 7,384 | ---
tags:
- summarization
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: ssr-base-finetuned-samsum-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 46.7505
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 46.2529
verified: true
- name: ROUGE-2
type: rouge
value: 21.3374
verified: true
- name: ROUGE-L
type: rouge
value: 36.1939
verified: true
- name: ROUGE-LSUM
type: rouge
value: 42.2937
verified: true
- name: loss
type: loss
value: 2.0463898181915283
verified: true
- name: gen_len
type: gen_len
value: 31.3724
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ssr-base-finetuned-samsum-en
This model is a fine-tuned version of [microsoft/ssr-base](https://huggingface.co/microsoft/ssr-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6231
- Rouge1: 46.7505
- Rouge2: 22.3968
- Rougel: 37.1784
- Rougelsum: 42.891
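For a quick test (the dialogue below is a made-up example), the checkpoint can be used through the summarization pipeline:
```python
from transformers import pipeline

# Minimal sketch: summarize a short dialogue with the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="santiviquez/ssr-base-finetuned-samsum-en")

dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes! 7pm at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```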
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.9682 | 1.0 | 300 | 1.6432 | 44.2182 | 20.8486 | 35.0914 | 40.9852 |
| 1.6475 | 2.0 | 600 | 1.5946 | 45.3919 | 21.6955 | 36.2411 | 41.8532 |
| 1.5121 | 3.0 | 900 | 1.5737 | 46.1769 | 22.4178 | 36.9762 | 42.6614 |
| 1.4112 | 4.0 | 1200 | 1.5774 | 46.6047 | 22.8227 | 37.2457 | 43.1935 |
| 1.323 | 5.0 | 1500 | 1.5825 | 46.6162 | 22.485 | 37.2846 | 42.9834 |
| 1.2613 | 6.0 | 1800 | 1.5883 | 46.4253 | 22.1199 | 37.0491 | 42.5189 |
| 1.2077 | 7.0 | 2100 | 1.5965 | 46.485 | 22.3636 | 37.2677 | 42.7499 |
| 1.1697 | 8.0 | 2400 | 1.6174 | 46.8654 | 22.6291 | 37.4201 | 43.0875 |
| 1.1367 | 9.0 | 2700 | 1.6188 | 46.707 | 22.305 | 37.156 | 42.9087 |
| 1.118 | 10.0 | 3000 | 1.6231 | 46.7505 | 22.3968 | 37.1784 | 42.891 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
lakshaywadhwa1993/ner_marathi_bert | 060094605f07021b0c3b465affc54b6bc5179adc | 2022-06-09T22:01:58.000Z | [
"pytorch",
"bert",
"token-classification",
"mr",
"dataset:wikiann",
"transformers",
"autotrain_compatible"
] | token-classification | false | lakshaywadhwa1993 | null | lakshaywadhwa1993/ner_marathi_bert | 28 | null | transformers | 7,385 | ---
language: mr
datasets:
- wikiann
examples:
widget:
- text: "राज्यसभा निवडणुकांसाठी उद्या मुंबईत मतदान होणार आहे."
example_title: "Sentence_1"
- text: "विराट कोहली भारताकडून खेळतो."
example_title: "Sentence_2"
- text: "नवी दिल्ली ही भारताची राजधानी आहे"
example_title: "Sentence_3"
---
<h1>Marathi Named Entity Recognition Model trained using transfer learning</h1>
Fine-tuned bert-base-multilingual-cased on the Wikiann dataset for performing NER on the Marathi language.
## Label ID and its corresponding label name
Label list: O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6)
## Example
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("lakshaywadhwa1993/ner_marathi_bert")
model = AutoModelForTokenClassification.from_pretrained("lakshaywadhwa1993/ner_marathi_bert")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = ["राज्यसभा","निवडणुकांसाठी","मुंबईत","भाजपचे" ,"चिंचवडचे", "आमदार", "लक्ष्मण", "जगताप"]
results = nlp(example)
results
```
|
nikitakotsehub/AirlineDistilBERT | 2a72ac93509aac84926c484b40f4001bdad75f43 | 2022-06-13T22:14:42.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | nikitakotsehub | null | nikitakotsehub/AirlineDistilBERT | 28 | null | transformers | 7,386 | Entry not found |
waboucay/camembert-large-finetuned-repnum_wl-rua_wl | e0b3eb6d8ddaf13f50d9359a010f98176c7e96a7 | 2022-06-16T07:36:43.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-large-finetuned-repnum_wl-rua_wl | 28 | null | transformers | 7,387 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 84.5 | 84.3 |
| test | 85.2 | 85.1 |
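A minimal inference sketch, assuming the model takes a (premise, hypothesis) pair as a standard sequence-pair classification input, could look like this; the label names are read from the model config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score a premise/hypothesis pair with the fine-tuned checkpoint.
model_id = "waboucay/camembert-large-finetuned-repnum_wl-rua_wl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Le projet de loi a été adopté par l'Assemblée nationale."
hypothesis = "Le texte a été voté."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```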
|
AlekseyKorshuk/results-gpt-j-base-erotic | 7f4cefe2dea30658129b8ad67288809c0e1bdbab | 2022-06-18T13:21:51.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/results-gpt-j-base-erotic | 28 | null | transformers | 7,388 | Entry not found |
nilaB97/bertweet-refugee | bf8eee7bcf3f8763b976e169113ad080fd953847 | 2022-07-01T15:02:51.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nilaB97 | null | nilaB97/bertweet-refugee | 28 | null | transformers | 7,389 | Entry not found |
sumitrsch/muril_large_multiconer22_hi | ef3ad6981100da40dc26be41faed2ff8ad42e277 | 2022-07-02T17:46:01.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | token-classification | false | sumitrsch | null | sumitrsch/muril_large_multiconer22_hi | 28 | 3 | transformers | 7,390 | ---
license: afl-3.0
---
This model is fine-tuned for the MultiCoNER 2022 task on the Hindi dataset.
hi_test.csv is the preprocessed Hindi test set, built from the CoNLL-format data provided by the MultiCoNER 2022 task.
This model can predict NER tags for Hindi sentences using the Colab notebook https://colab.research.google.com/drive/17WyqwdoRNnzImeik6wTRE5uuj9QQnkXA#scrollTo=jQIadl5OOX1N
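If you prefer not to use the Colab notebook, a minimal sketch with the Transformers pipeline might look like the following; the example sentence is an assumption, and the exact tag set follows whatever the checkpoint's config defines (the MultiCoNER 2022 classes such as PER, LOC, GRP, CORP, PROD and CW).
```python
from transformers import pipeline

# Minimal sketch: tag a Hindi sentence with the fine-tuned MuRIL checkpoint.
ner = pipeline(
    "token-classification",
    model="sumitrsch/muril_large_multiconer22_hi",
    aggregation_strategy="simple",
)

# "Virat Kohli plays cricket for India."
print(ner("विराट कोहली भारत के लिए क्रिकेट खेलते हैं।"))
```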
|
sanjay-m1/grammar-corrector-v2 | caddb28d7ad2ef60025b1fc783e876dd22a51413 | 2022-06-26T19:10:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sanjay-m1 | null | sanjay-m1/grammar-corrector-v2 | 28 | null | transformers | 7,391 | **This model is part of the Gramformer library** please refer to https://github.com/PrithivirajDamodaran/Gramformer/
|
nvidia/stt_fr_conformer_transducer_large | 65327ea5af70df919fa11ca4a8436c54ae67bd28 | 2022-06-30T19:59:08.000Z | [
"nemo",
"fr",
"dataset:multilingual_librispeech",
"dataset:mozilla-foundation/common_voice_7_0",
"dataset:VoxPopuli",
"arxiv:2005.08100",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | nvidia | null | nvidia/stt_fr_conformer_transducer_large | 28 | 3 | nemo | 7,392 | ---
language:
- fr
library_name: nemo
datasets:
- multilingual_librispeech
- mozilla-foundation/common_voice_7_0
- VoxPopuli
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Transducer
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
model-index:
- name: stt_fr_conformer_transducer_large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MCV 7.0
type: mozilla-foundation/common_voice_7_0
config: fr
split: dev
args:
language: fr
metrics:
- name: Dev WER
type: wer
value: 6.85
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MCV 7.0
type: mozilla-foundation/common_voice_7_0
config: fr
split: test
args:
language: fr
metrics:
- name: Test WER
type: wer
value: 7.95
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual Librispeech
type: multilingual_librispeech
config: fr
split: dev
args:
language: fr
metrics:
- name: Dev WER
type: wer
value: 5.05
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual Librispeech
type: multilingual_librispeech
config: fr
split: test
args:
language: fr
metrics:
- name: Test WER
type: wer
value: 4.10
---
# NVIDIA Conformer-Transducer Large (fr)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model was trained on a composite dataset comprising over 1500 hours of French speech. It is a large size version of Conformer-Transducer (around 120M parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_fr_conformer_transducer_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_fr_conformer_transducer_large" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 kHz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-Transducer model is an autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses Transducer loss/decoding instead of CTC Loss. You may find more info on the detail of this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml).
The sentence-piece tokenizers [2] for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
## Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising over a thousand hours of French speech:
- MozillaCommonVoice 7.0 - 356 hours
- Multilingual LibriSpeech - 1036 hours
- VoxPopuli - 182 hours
Both models use the same dataset, except for a preprocessing step that strips hyphens from the data for the secondary model's training.
## Performance
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general.
The latest model obtains the following greedy scores on the following evaluation datasets:
- 6.85 % on MCV7.0 dev
- 7.95 % on MCV7.0 test
- 5.05 % on MLS dev
- 4.10 % on MLS test
Note that these evaluation datasets have been filtered and preprocessed to only contain French alphabet characters and have punctuation removed, apart from hyphens and apostrophes.
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
Further, since portions of the training set contain text from both pre- and post- 1990 orthographic reform, regularity of punctuation may vary between the two styles.
For downstream tasks requiring more consistency, finetuning or downstream processing may be required. If exact orthography is not necessary, then using the secondary model is advised.
## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
- [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
KhawajaAbaid/distilbert-base-uncased-finetuned-emotion | 8088786fb2543691734f75c98ef39c4411f64acc | 2022-06-30T08:50:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | KhawajaAbaid | null | KhawajaAbaid/distilbert-base-uncased-finetuned-emotion | 28 | null | transformers | 7,393 | Entry not found |
Abdelmageed95/opt-125m-economy-data | 4ba66696a31d3f981882b9f737bb9367571bfdd2 | 2022-07-01T16:11:10.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-generation | false | Abdelmageed95 | null | Abdelmageed95/opt-125m-economy-data | 28 | null | transformers | 7,394 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-125m-economy-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-economy-data
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9036
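As a quick illustration (not part of the original training setup), the fine-tuned OPT checkpoint can be used through the text-generation pipeline; the prompt below is an assumption.
```python
from transformers import pipeline

# Minimal sketch: generate a continuation with the fine-tuned OPT-125m checkpoint.
generator = pipeline("text-generation", model="Abdelmageed95/opt-125m-economy-data")

print(generator(
    "Inflation in the euro area",
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
)[0]["generated_text"])
```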
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
emilys/twitter-roberta-base-dec2021-CoNLL | 490838dbd0cd6511ccbf9039aaa7d79c69e5e88a | 2022-07-05T20:09:58.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | emilys | null | emilys/twitter-roberta-base-dec2021-CoNLL | 28 | null | transformers | 7,395 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter-roberta-base-dec2021-CoNLL
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9552512940390716
- name: Recall
type: recall
value: 0.9628071356445641
- name: F1
type: f1
value: 0.9590143324113654
- name: Accuracy
type: accuracy
value: 0.9926599431486313
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-dec2021-CoNLL
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0412
- Precision: 0.9553
- Recall: 0.9628
- F1: 0.9590
- Accuracy: 0.9927
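For reference, a minimal inference sketch with the token-classification pipeline might look like this; the example sentence is made up, and entity groups follow the CoNLL-2003 scheme (PER, ORG, LOC, MISC).
```python
from transformers import pipeline

# Minimal sketch: run the CoNLL-2003 fine-tuned checkpoint on a tweet-like sentence.
ner = pipeline(
    "token-classification",
    model="emilys/twitter-roberta-base-dec2021-CoNLL",
    aggregation_strategy="simple",
)

print(ner("Apple is opening a new office in Dublin, according to Tim Cook."))
```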
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.11 | 25 | 0.2126 | 0.5639 | 0.6067 | 0.5845 | 0.9349 |
| No log | 0.23 | 50 | 0.0849 | 0.8259 | 0.8612 | 0.8431 | 0.9765 |
| No log | 0.34 | 75 | 0.0640 | 0.8752 | 0.8957 | 0.8853 | 0.9820 |
| No log | 0.45 | 100 | 0.0572 | 0.8848 | 0.9034 | 0.8940 | 0.9832 |
| No log | 0.57 | 125 | 0.0469 | 0.9071 | 0.9239 | 0.9155 | 0.9866 |
| No log | 0.68 | 150 | 0.0442 | 0.9198 | 0.9278 | 0.9238 | 0.9877 |
| No log | 0.8 | 175 | 0.0424 | 0.9192 | 0.9322 | 0.9256 | 0.9881 |
| No log | 0.91 | 200 | 0.0407 | 0.9170 | 0.9414 | 0.9291 | 0.9891 |
| No log | 1.02 | 225 | 0.0402 | 0.9264 | 0.9403 | 0.9333 | 0.9894 |
| No log | 1.14 | 250 | 0.0399 | 0.9329 | 0.9446 | 0.9387 | 0.9897 |
| No log | 1.25 | 275 | 0.0384 | 0.9278 | 0.9413 | 0.9345 | 0.9897 |
| No log | 1.36 | 300 | 0.0363 | 0.9379 | 0.9477 | 0.9427 | 0.9906 |
| No log | 1.48 | 325 | 0.0362 | 0.9380 | 0.9493 | 0.9436 | 0.9905 |
| No log | 1.59 | 350 | 0.0364 | 0.9397 | 0.9497 | 0.9447 | 0.9905 |
| No log | 1.7 | 375 | 0.0367 | 0.9324 | 0.9475 | 0.9399 | 0.9899 |
| No log | 1.82 | 400 | 0.0372 | 0.9350 | 0.9460 | 0.9404 | 0.9899 |
| No log | 1.93 | 425 | 0.0339 | 0.9411 | 0.9514 | 0.9462 | 0.9909 |
| No log | 2.05 | 450 | 0.0336 | 0.9419 | 0.9529 | 0.9474 | 0.9911 |
| No log | 2.16 | 475 | 0.0336 | 0.9447 | 0.9537 | 0.9492 | 0.9914 |
| 0.079 | 2.27 | 500 | 0.0345 | 0.9420 | 0.9566 | 0.9492 | 0.9914 |
| 0.079 | 2.39 | 525 | 0.0364 | 0.9436 | 0.9522 | 0.9479 | 0.9913 |
| 0.079 | 2.5 | 550 | 0.0340 | 0.9479 | 0.9514 | 0.9496 | 0.9916 |
| 0.079 | 2.61 | 575 | 0.0339 | 0.9481 | 0.9559 | 0.9520 | 0.9917 |
| 0.079 | 2.73 | 600 | 0.0396 | 0.9326 | 0.9504 | 0.9414 | 0.9902 |
| 0.079 | 2.84 | 625 | 0.0348 | 0.9461 | 0.9544 | 0.9502 | 0.9915 |
| 0.079 | 2.95 | 650 | 0.0359 | 0.9419 | 0.9527 | 0.9473 | 0.9908 |
| 0.079 | 3.07 | 675 | 0.0347 | 0.9434 | 0.9573 | 0.9503 | 0.9916 |
| 0.079 | 3.18 | 700 | 0.0351 | 0.9464 | 0.9566 | 0.9515 | 0.9918 |
| 0.079 | 3.3 | 725 | 0.0370 | 0.9446 | 0.9536 | 0.9491 | 0.9911 |
| 0.079 | 3.41 | 750 | 0.0358 | 0.9462 | 0.9583 | 0.9522 | 0.9917 |
| 0.079 | 3.52 | 775 | 0.0353 | 0.9483 | 0.9564 | 0.9523 | 0.9920 |
| 0.079 | 3.64 | 800 | 0.0351 | 0.9469 | 0.9564 | 0.9516 | 0.9916 |
| 0.079 | 3.75 | 825 | 0.0361 | 0.9479 | 0.9579 | 0.9529 | 0.9919 |
| 0.079 | 3.86 | 850 | 0.0370 | 0.9498 | 0.9581 | 0.9539 | 0.9918 |
| 0.079 | 3.98 | 875 | 0.0374 | 0.9460 | 0.9574 | 0.9517 | 0.9915 |
| 0.079 | 4.09 | 900 | 0.0381 | 0.9506 | 0.9594 | 0.9550 | 0.9922 |
| 0.079 | 4.2 | 925 | 0.0415 | 0.9460 | 0.9557 | 0.9509 | 0.9912 |
| 0.079 | 4.32 | 950 | 0.0390 | 0.9493 | 0.9556 | 0.9524 | 0.9917 |
| 0.079 | 4.43 | 975 | 0.0389 | 0.9483 | 0.9591 | 0.9536 | 0.9919 |
| 0.0123 | 4.55 | 1000 | 0.0379 | 0.9464 | 0.9569 | 0.9516 | 0.9918 |
| 0.0123 | 4.66 | 1025 | 0.0376 | 0.9463 | 0.9579 | 0.9521 | 0.9920 |
| 0.0123 | 4.77 | 1050 | 0.0373 | 0.9499 | 0.9571 | 0.9535 | 0.9917 |
| 0.0123 | 4.89 | 1075 | 0.0366 | 0.9520 | 0.9584 | 0.9552 | 0.9923 |
| 0.0123 | 5.0 | 1100 | 0.0374 | 0.9488 | 0.9606 | 0.9547 | 0.9923 |
| 0.0123 | 5.11 | 1125 | 0.0393 | 0.9516 | 0.9589 | 0.9552 | 0.9920 |
| 0.0123 | 5.23 | 1150 | 0.0389 | 0.9539 | 0.9603 | 0.9571 | 0.9925 |
| 0.0123 | 5.34 | 1175 | 0.0397 | 0.9486 | 0.9576 | 0.9531 | 0.9917 |
| 0.0123 | 5.45 | 1200 | 0.0397 | 0.9478 | 0.9569 | 0.9523 | 0.9919 |
| 0.0123 | 5.57 | 1225 | 0.0388 | 0.9483 | 0.9593 | 0.9537 | 0.9920 |
| 0.0123 | 5.68 | 1250 | 0.0389 | 0.9502 | 0.9606 | 0.9554 | 0.9923 |
| 0.0123 | 5.8 | 1275 | 0.0380 | 0.9547 | 0.9616 | 0.9582 | 0.9925 |
| 0.0123 | 5.91 | 1300 | 0.0391 | 0.9496 | 0.9603 | 0.9549 | 0.9924 |
| 0.0123 | 6.02 | 1325 | 0.0381 | 0.9548 | 0.9603 | 0.9575 | 0.9924 |
| 0.0123 | 6.14 | 1350 | 0.0400 | 0.9529 | 0.9596 | 0.9562 | 0.9922 |
| 0.0123 | 6.25 | 1375 | 0.0393 | 0.9544 | 0.9616 | 0.9580 | 0.9927 |
| 0.0123 | 6.36 | 1400 | 0.0419 | 0.9514 | 0.9621 | 0.9567 | 0.9924 |
| 0.0123 | 6.48 | 1425 | 0.0415 | 0.9532 | 0.9626 | 0.9579 | 0.9925 |
| 0.0123 | 6.59 | 1450 | 0.0415 | 0.952 | 0.9613 | 0.9566 | 0.9923 |
| 0.0123 | 6.7 | 1475 | 0.0399 | 0.9542 | 0.9611 | 0.9577 | 0.9925 |
| 0.0052 | 6.82 | 1500 | 0.0416 | 0.9522 | 0.9591 | 0.9556 | 0.9921 |
| 0.0052 | 6.93 | 1525 | 0.0410 | 0.9502 | 0.9599 | 0.9550 | 0.9919 |
| 0.0052 | 7.05 | 1550 | 0.0406 | 0.9507 | 0.9613 | 0.9560 | 0.9921 |
| 0.0052 | 7.16 | 1575 | 0.0400 | 0.9508 | 0.9603 | 0.9555 | 0.9923 |
| 0.0052 | 7.27 | 1600 | 0.0402 | 0.9525 | 0.9618 | 0.9571 | 0.9924 |
| 0.0052 | 7.39 | 1625 | 0.0401 | 0.9550 | 0.9633 | 0.9591 | 0.9925 |
| 0.0052 | 7.5 | 1650 | 0.0397 | 0.9555 | 0.9647 | 0.9601 | 0.9927 |
| 0.0052 | 7.61 | 1675 | 0.0412 | 0.9526 | 0.9610 | 0.9568 | 0.9922 |
| 0.0052 | 7.73 | 1700 | 0.0419 | 0.9531 | 0.9616 | 0.9574 | 0.9923 |
| 0.0052 | 7.84 | 1725 | 0.0407 | 0.9555 | 0.9621 | 0.9588 | 0.9927 |
| 0.0052 | 7.95 | 1750 | 0.0409 | 0.9551 | 0.9628 | 0.9589 | 0.9927 |
| 0.0052 | 8.07 | 1775 | 0.0413 | 0.9520 | 0.9616 | 0.9568 | 0.9924 |
| 0.0052 | 8.18 | 1800 | 0.0414 | 0.9505 | 0.9605 | 0.9555 | 0.9923 |
| 0.0052 | 8.3 | 1825 | 0.0410 | 0.9542 | 0.9605 | 0.9573 | 0.9924 |
| 0.0052 | 8.41 | 1850 | 0.0417 | 0.9553 | 0.9599 | 0.9576 | 0.9924 |
| 0.0052 | 8.52 | 1875 | 0.0418 | 0.9545 | 0.9606 | 0.9576 | 0.9923 |
| 0.0052 | 8.64 | 1900 | 0.0414 | 0.9544 | 0.9616 | 0.9580 | 0.9924 |
| 0.0052 | 8.75 | 1925 | 0.0419 | 0.9555 | 0.9620 | 0.9587 | 0.9925 |
| 0.0052 | 8.86 | 1950 | 0.0415 | 0.9544 | 0.9611 | 0.9577 | 0.9926 |
| 0.0052 | 8.98 | 1975 | 0.0413 | 0.9542 | 0.9611 | 0.9577 | 0.9926 |
| 0.0027 | 9.09 | 2000 | 0.0412 | 0.9553 | 0.9628 | 0.9590 | 0.9927 |
| 0.0027 | 9.2 | 2025 | 0.0408 | 0.9554 | 0.9630 | 0.9592 | 0.9927 |
| 0.0027 | 9.32 | 2050 | 0.0404 | 0.9545 | 0.9613 | 0.9579 | 0.9926 |
| 0.0027 | 9.43 | 2075 | 0.0407 | 0.9557 | 0.9618 | 0.9587 | 0.9926 |
| 0.0027 | 9.55 | 2100 | 0.0410 | 0.9552 | 0.9618 | 0.9585 | 0.9926 |
| 0.0027 | 9.66 | 2125 | 0.0412 | 0.9552 | 0.9620 | 0.9586 | 0.9925 |
| 0.0027 | 9.77 | 2150 | 0.0413 | 0.9557 | 0.9621 | 0.9589 | 0.9925 |
| 0.0027 | 9.89 | 2175 | 0.0413 | 0.9557 | 0.9621 | 0.9589 | 0.9925 |
| 0.0027 | 10.0 | 2200 | 0.0413 | 0.9557 | 0.9621 | 0.9589 | 0.9925 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
luztraplet/roberta-large-finetuned-boolq | 7518bfb59bf6a5c0e40fee46314f7a99e95951c0 | 2022-07-06T15:49:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:boolq",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | luztraplet | null | luztraplet/roberta-large-finetuned-boolq | 28 | null | transformers | 7,396 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- boolq
model-index:
- name: roberta-large-finetuned-boolq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-boolq
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the boolq dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3791
- eval_accuracy: 0.8459
- eval_runtime: 95.3733
- eval_samples_per_second: 34.286
- eval_steps_per_second: 4.288
- epoch: 2.0
- step: 588
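A minimal inference sketch, assuming the BoolQ pair is encoded as (question, passage) the way the dataset is usually fed to sequence-pair classifiers, could look like this; the label names are read from the model config, and the exact input ordering should be checked against the training script.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: answer a yes/no question about a passage with the fine-tuned checkpoint.
model_id = "luztraplet/roberta-large-finetuned-boolq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Is the sky blue on a clear day?"
passage = "On a clear day, molecules in the atmosphere scatter blue light, so the sky appears blue."

inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```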
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nielsr/videomae-base-short | 086f2e2091dba2e533fc324bddb3ef3a2d1c7692 | 2022-07-08T15:21:42.000Z | [
"pytorch",
"videomae",
"transformers"
] | null | false | nielsr | null | nielsr/videomae-base-short | 28 | null | transformers | 7,397 | Entry not found |
pyronear/mobilenet_v3_small | a6a0b39ca1f5b0a247eb0a2e83f06cd95fc03674 | 2022-07-17T23:48:39.000Z | [
"pytorch",
"onnx",
"dataset:pyronear/openfire",
"arxiv:1905.02244",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | pyronear | null | pyronear/mobilenet_v3_small | 28 | null | transformers | 7,398 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# MobileNet V3 - Small model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The MobileNet V3 architecture was introduced in [this paper](https://arxiv.org/pdf/1905.02244.pdf).
## Model description
The core idea of the author is to simplify the final stage, while using SiLU as activations and making Squeeze-and-Excite blocks larger.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/mobilenet_v3_small").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1905-02244,
author = {Andrew Howard and
Mark Sandler and
Grace Chu and
Liang{-}Chieh Chen and
Bo Chen and
Mingxing Tan and
Weijun Wang and
Yukun Zhu and
Ruoming Pang and
Vijay Vasudevan and
Quoc V. Le and
Hartwig Adam},
title = {Searching for MobileNetV3},
journal = {CoRR},
volume = {abs/1905.02244},
year = {2019},
url = {http://arxiv.org/abs/1905.02244},
eprinttype = {arXiv},
eprint = {1905.02244},
timestamp = {Thu, 27 May 2021 16:20:51 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{chintala_torchvision_2017,
author = {Chintala, Soumith},
month = {4},
title = {{Torchvision}},
url = {https://github.com/pytorch/vision},
year = {2017}
}
``` |
mhdr78/finetuned_parsinlu_en_fa | 7973562fac07b2b0f3370fc23ee842978af773a3 | 2022-07-15T05:16:22.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mhdr78 | null | mhdr78/finetuned_parsinlu_en_fa | 28 | 1 | transformers | 7,399 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: finetuned_parsinlu_en_fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_parsinlu_en_fa
This model is a fine-tuned version of [persiannlp/mt5-small-parsinlu-translation_en_fa](https://huggingface.co/persiannlp/mt5-small-parsinlu-translation_en_fa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5214
- Bleu: 13.5318
- Gen Len: 12.1251
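As a quick illustration (not part of the original training setup), the checkpoint can be used through the text2text-generation pipeline; the example sentence is an assumption, and whether a task prefix is needed should be checked against the base parsinlu model's card.
```python
from transformers import pipeline

# Minimal sketch: translate an English sentence to Persian with the fine-tuned mT5 checkpoint.
translator = pipeline("text2text-generation", model="mhdr78/finetuned_parsinlu_en_fa")

print(translator("The weather is nice today.", max_length=32)[0]["generated_text"])
```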
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.7125 | 1.0 | 30987 | 1.5265 | 13.4269 | 12.127 |
| 1.6943 | 2.0 | 61974 | 1.5214 | 13.5318 | 12.1251 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|