modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_test_set_05_03_2022-06_01_05 | d34193ddcbe9cf5879c04af7430c70c93b2d8119 | 2022-03-05T05:01:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_test_set_05_03_2022-06_01_05 | 59 | null | transformers | 5,700 | Entry not found |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_essays_05_03_2022-06_11_09 | 2b465a929662f098c4a050c843e6e64f5e64823f | 2022-03-05T05:13:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_essays_05_03_2022-06_11_09 | 59 | null | transformers | 5,701 | Entry not found |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_editorials_05_03_2022-06_13_50 | 87446f91c03aed2c096d6d16e8d6b095f808a221 | 2022-03-05T05:16:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_editorials_05_03_2022-06_13_50 | 59 | null | transformers | 5,702 | Entry not found |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_webDiscourse_05_03_2022-06_19_06 | efac69bdc4889991ad9faf65154f75846bff6f71 | 2022-03-05T05:21:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_webDiscourse_05_03_2022-06_19_06 | 59 | null | transformers | 5,703 | Entry not found |
huggingtweets/baguioni | 67dbe15cf7af4773208bcc319822138dd04d6c01 | 2022-03-27T22:55:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/baguioni | 59 | 1 | transformers | 5,704 | ---
language: en
thumbnail: http://www.huggingtweets.com/baguioni/1648421716784/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506662013707046914/hVtCPrPL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">baguio</div>
<div style="text-align: center; font-size: 14px;">@baguioni</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from baguio.
| Data | baguio |
| --- | --- |
| Tweets downloaded | 3012 |
| Retweets | 1090 |
| Short tweets | 527 |
| Tweets kept | 1395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z9nh9v8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @baguioni's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/baguioni')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tdrenis/finetuned-bot-detector | fb0865bf8e6e889844825e656f8bf252944019b6 | 2022-04-06T20:31:05.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | tdrenis | null | tdrenis/finetuned-bot-detector | 59 | null | transformers | 5,705 | Student project that fine-tuned the roberta-base-openai-detector model on the Twibot-20 dataset. |
ArthurZ/opt-350m | 5e09c00aa179327fedda07d221042cc7c32f61ab | 2022-06-21T20:24:40.000Z | [
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | ArthurZ | null | ArthurZ/opt-350m | 59 | null | transformers | 5,706 | ---
license: apache-2.0
---
|
fabiochiu/t5-base-tag-generation | 2f6b24e40ad4ec1788e91bfea7de30f42a733609 | 2022-05-23T13:46:13.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | fabiochiu | null | fabiochiu/t5-base-tag-generation | 59 | 1 | transformers | 5,707 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-tag-generation
results: []
widget:
- text: "Python is a high-level, interpreted, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically-typed and garbage-collected."
example_title: "Programming"
---
# Model description
This model is [t5-base](https://huggingface.co/t5-base) fine-tuned on the [190k Medium Articles](https://www.kaggle.com/datasets/fabiochiusano/medium-articles) dataset for predicting article tags using the article textual content as input. While usually formulated as a multi-label classification problem, this model deals with _tag generation_ as a text2text generation task (inspiration from [text2tags](https://huggingface.co/efederici/text2tags)).
# How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import nltk
nltk.download('punkt')
tokenizer = AutoTokenizer.from_pretrained("fabiochiu/t5-base-tag-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("fabiochiu/t5-base-tag-generation")
text = """
Python is a high-level, interpreted, general-purpose programming language. Its
design philosophy emphasizes code readability with the use of significant
indentation. Python is dynamically-typed and garbage-collected.
"""
inputs = tokenizer([text], max_length=512, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10,
max_length=64)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
tags = list(set(decoded_output.strip().split(", ")))
print(tags)
# ['Programming', 'Code', 'Software Development', 'Programming Languages',
# 'Software', 'Developer', 'Python', 'Software Engineering', 'Science',
# 'Engineering', 'Technology', 'Computer Science', 'Coding', 'Digital', 'Tech',
# 'Python Programming']
```
## Data cleaning
The dataset is composed of Medium articles and their tags. However, each Medium article can have at most five tags, so the author has to choose the tags he/she believes are the best (mainly for SEO-related purposes). This means that an article with the "Python" tag may not have the "Programming Languages" tag, even though the former implies the latter.
To clean the dataset and account for this problem, a hand-made taxonomy of about 1000 tags was built. Using the taxonomy, the tags of each article have been augmented (e.g. an article with the "Python" tag will have the "Programming Languages" tag as well, as the taxonomy says that "Python" is part of "Programming Languages"); a minimal sketch of this augmentation step is shown below. The taxonomy is not public; if you are interested in it, please send an email to [email protected].
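The following is an illustrative sketch only: the real taxonomy is not public, so the tiny child-to-parents mapping used here is a made-up stand-in that just demonstrates the augmentation idea (expanding every tag with its ancestors).
```python
# Illustrative sketch only: the real taxonomy is not public, so this tiny
# child -> parents mapping is a made-up stand-in for demonstration.
taxonomy = {
    "Python": ["Programming Languages"],
    "Programming Languages": ["Programming"],
}

def augment_tags(tags):
    """Expand a list of tags with all of their ancestors in the taxonomy."""
    expanded = set(tags)
    stack = list(tags)
    while stack:
        tag = stack.pop()
        for parent in taxonomy.get(tag, []):
            if parent not in expanded:
                expanded.add(parent)
                stack.append(parent)
    return sorted(expanded)

print(augment_tags(["Python"]))
# ['Programming', 'Programming Languages', 'Python']
```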
## Training and evaluation data
The model has been trained for a single epoch on about 50,000 articles and evaluated on 1,000 random articles not used during training.
## Evaluation results
- eval_loss: 0.8474
- eval_rouge1: 38.6033
- eval_rouge2: 20.5952
- eval_rougeL: 36.4458
- eval_rougeLsum: 36.3202
- eval_gen_len: 15.257 # average number of generated tokens
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
autoevaluate/multi-class-classification | 79771c00c9552278050dc1ef39d694ad71d51fbc | 2022-06-02T12:25:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | autoevaluate | null | autoevaluate/multi-class-classification | 59 | null | transformers | 5,708 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: multi-class-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
verified: true
- name: Precision Macro
type: precision
value: 0.8738350796775306
verified: true
- name: Precision Micro
type: precision
value: 0.9185
verified: true
- name: Precision Weighted
type: precision
value: 0.9179425177997311
verified: true
- name: Recall Macro
type: recall
value: 0.8650962919021573
verified: true
- name: Recall Micro
type: recall
value: 0.9185
verified: true
- name: Recall Weighted
type: recall
value: 0.9185
verified: true
- name: F1 Macro
type: f1
value: 0.8692821860210945
verified: true
- name: F1 Micro
type: f1
value: 0.9185
verified: true
- name: F1 Weighted
type: f1
value: 0.9181177508591364
verified: true
- name: loss
type: loss
value: 0.20905950665473938
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.8920254536671932
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-class-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2009
- Accuracy: 0.928
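As a quick sanity check, the checkpoint can be loaded with the standard `transformers` text-classification pipeline (a minimal sketch; the example sentence below is an arbitrary input, not part of the evaluation data):
```python
# Minimal usage sketch: classify an example sentence with the fine-tuned model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="autoevaluate/multi-class-classification",
)
print(classifier("I am so happy today!"))
```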
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2643 | 1.0 | 1000 | 0.2009 | 0.928 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/arstechnica | b341e91e121507278f4c54880fad71ac8e687c14 | 2022-06-20T06:05:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/arstechnica | 59 | null | transformers | 5,709 | ---
language: en
thumbnail: http://www.huggingtweets.com/arstechnica/1655705137296/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2215576731/ars-logo_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ars Technica</div>
<div style="text-align: center; font-size: 14px;">@arstechnica</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ars Technica.
| Data | Ars Technica |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 27 |
| Short tweets | 0 |
| Tweets kept | 3223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2n328dqy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @arstechnica's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/koacg5oh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/koacg5oh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/arstechnica')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fujiki/t5-efficient-xl-nl12-en2ja | 5099abd04699a806c353e3f276f09c47e4aa9b31 | 2022-07-04T02:18:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-efficient-xl-nl12-en2ja | 59 | null | transformers | 5,710 | ---
license: afl-3.0
---
|
ckb/toki-en-mt | ad3e608a405d5c5c96f3b3cd5561b19efab102d9 | 2022-07-09T10:57:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tok",
"en",
"transformers",
"generated_from_trainer",
"translation",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | ckb | null | ckb/toki-en-mt | 59 | null | transformers | 5,711 | ---
license: apache-2.0
language:
- tok
- en
tags:
- generated_from_trainer
- translation
metrics:
- bleu
model-index:
- name: toki-en-mt
results: []
widget:
- text: "toki! mi jan Ton. mi lon ma Tawan."
- text: "soweli li toki ala toki e toki Inli?"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toki-en-mt
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ROMANCE-en](https://huggingface.co/Helsinki-NLP/opus-mt-ROMANCE-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2840
- Bleu: 26.7612
- Gen Len: 9.0631
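A minimal usage sketch, assuming standard `transformers` pipeline usage (the example sentence is taken from the widget above):
```python
# Minimal usage sketch: Toki Pona -> English translation with the fine-tuned
# Marian model, using the standard translation pipeline.
from transformers import pipeline

translator = pipeline("translation", model="ckb/toki-en-mt")
print(translator("toki! mi jan Ton. mi lon ma Tawan.")[0]["translation_text"])
```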
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.7228 | 1.0 | 1260 | 1.4572 | 19.9464 | 9.2177 |
| 1.3182 | 2.0 | 2520 | 1.3356 | 22.4628 | 9.1263 |
| 1.1241 | 3.0 | 3780 | 1.3028 | 23.5152 | 9.0462 |
| 0.9995 | 4.0 | 5040 | 1.2784 | 23.9526 | 9.1697 |
| 0.8945 | 5.0 | 6300 | 1.2739 | 24.7707 | 9.0914 |
| 0.8331 | 6.0 | 7560 | 1.2725 | 25.3477 | 9.0518 |
| 0.7641 | 7.0 | 8820 | 1.2770 | 26.165 | 9.0245 |
| 0.7163 | 8.0 | 10080 | 1.2809 | 25.8053 | 9.0933 |
| 0.6886 | 9.0 | 11340 | 1.2799 | 26.5752 | 9.0669 |
| 0.6627 | 10.0 | 12600 | 1.2840 | 26.7612 | 9.0631 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
p-christ/QandAClassifier | a4a4bbaea9defc398dc588b2ecf958cfb4bf4e8c | 2022-07-11T16:49:51.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | p-christ | null | p-christ/QandAClassifier | 59 | null | transformers | 5,712 | Entry not found |
OATML-Markslab/Tranception | d1ef627127760862bc0615ee74372ad4b23b2193 | 2022-07-18T15:25:35.000Z | [
"pytorch",
"tranception",
"fill-mask",
"arxiv:2205.13760",
"transformers",
"autotrain_compatible"
] | fill-mask | false | OATML-Markslab | null | OATML-Markslab/Tranception | 59 | 3 | transformers | 5,713 | # Tranception model
This Hugging Face Hub repo contains the model checkpoint for the Tranception model as described in our paper ["Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval"](https://arxiv.org/abs/2205.13760). The official GitHub repository can be accessed [here](https://github.com/OATML-Markslab/Tranception). This project is a joint collaboration between the [Marks lab](https://www.deboramarkslab.com/) and the [OATML group](https://oatml.cs.ox.ac.uk/).
## Abstract
The ability to accurately model the fitness landscape of protein sequences is critical to a wide range of applications, from quantifying the effects of human variants on disease likelihood, to predicting immune-escape mutations in viruses and designing novel biotherapeutic proteins. Deep generative models of protein sequences trained on multiple sequence alignments have been the most successful approaches so far to address these tasks. The performance of these methods is however contingent on the availability of sufficiently deep and diverse alignments for reliable training. Their potential scope is thus limited by the fact many protein families are hard, if not impossible, to align. Large language models trained on massive quantities of non-aligned protein sequences from diverse families address these problems and show potential to eventually bridge the performance gap. We introduce Tranception, a novel transformer architecture leveraging autoregressive predictions and retrieval of homologous sequences at inference to achieve state-of-the-art fitness prediction performance. Given its markedly higher performance on multiple mutants, robustness to shallow alignments and ability to score indels, our approach offers significant gain of scope over existing approaches. To enable more rigorous model testing across a broader range of protein families, we develop ProteinGym -- an extensive set of multiplexed assays of variant effects, substantially increasing both the number and diversity of assays compared to existing benchmarks.
## License
This project is available under the MIT license.
## Reference
If you use Tranception or other files provided through our GitHub repository, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
Pre-print: https://arxiv.org/abs/2205.13760
GitHub: https://github.com/OATML-Markslab/Tranception |
eclat12450/fine-tuned-NSPbert-12 | 6f0e086fb7fcff6415008a989bb1f57177c45708 | 2022-07-20T10:14:54.000Z | [
"pytorch",
"bert",
"next-sentence-prediction",
"transformers"
] | null | false | eclat12450 | null | eclat12450/fine-tuned-NSPbert-12 | 59 | null | transformers | 5,714 | Entry not found |
trickstters/DialoGPT-small-evanbot | fc7149e47717f2b9a0d58356d93be12c37c7db5b | 2022-07-27T09:01:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | trickstters | null | trickstters/DialoGPT-small-evanbot | 59 | null | transformers | 5,715 | ---
tags:
- conversational
---
# a |
CalvinHuang/mt5-small-finetuned-amazon-en-es | b240a4a8723a1dcd7e8b4438341f7fd8c3e1bae4 | 2022-02-02T03:50:37.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | CalvinHuang | null | CalvinHuang/mt5-small-finetuned-amazon-en-es | 58 | 1 | transformers | 5,716 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0393
- Rouge1: 17.2936
- Rouge2: 8.0678
- Rougel: 16.8129
- Rougelsum: 16.9991
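A minimal usage sketch, assuming standard `transformers` pipeline usage (the input text is a placeholder, not part of the training or evaluation data):
```python
# Minimal usage sketch: summarization with the fine-tuned mT5 checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="CalvinHuang/mt5-small-finetuned-amazon-en-es",
)
review = "Replace this with a product review you would like to summarize."
print(summarizer(review, max_length=64)[0]["summary_text"])
```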
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.6665 | 1.0 | 1209 | 3.2917 | 13.912 | 5.595 | 13.2984 | 13.4171 |
| 3.8961 | 2.0 | 2418 | 3.1711 | 16.2845 | 8.6033 | 15.5509 | 15.7383 |
| 3.5801 | 3.0 | 3627 | 3.0917 | 17.316 | 8.122 | 16.697 | 16.773 |
| 3.4258 | 4.0 | 4836 | 3.0583 | 16.1347 | 7.7829 | 15.6475 | 15.7804 |
| 3.3154 | 5.0 | 6045 | 3.0573 | 17.5918 | 8.7349 | 17.0537 | 17.2216 |
| 3.2438 | 6.0 | 7254 | 3.0479 | 17.2294 | 8.0383 | 16.8141 | 16.9858 |
| 3.2024 | 7.0 | 8463 | 3.0377 | 17.2918 | 8.139 | 16.8178 | 16.9671 |
| 3.1745 | 8.0 | 9672 | 3.0393 | 17.2936 | 8.0678 | 16.8129 | 16.9991 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DarshanDeshpande/marathi-distilbert | b0fe751059df2cb52a8dd72a2aa5c4c346e2809a | 2021-03-23T08:20:29.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"mr",
"dataset:Oscar Corpus, News, Stories",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | DarshanDeshpande | null | DarshanDeshpande/marathi-distilbert | 58 | 3 | transformers | 5,717 | ---
language:
- mr
tags:
- fill-mask
license: apache-2.0
datasets:
- Oscar Corpus, News, Stories
widget:
- text: "हा खरोखर चांगला [MASK] आहे."
---
# Marathi DistilBERT
## Model description
This model is an adaptation of DistilBERT (Victor Sanh et al., 2019) for the Marathi language. This version of Marathi-DistilBERT is trained from scratch on approximately 11.2 million sentences.
```
DISCLAIMER
This model has not been thoroughly tested and may contain biased opinions or inappropriate language. User discretion is advised
```
## Training data
The training data has been extracted from a variety of sources, mainly including:
1. Oscar Corpus
2. Marathi Newspapers
3. Marathi storybooks and articles
The data is cleaned by removing all languages other than Marathi, while preserving common punctuation marks.
## Training procedure
The model is trained from scratch using the Adam optimizer with a learning rate of 1e-4 and default β1 and β2 values of 0.9 and 0.999 respectively, with a total batch size of 256 on a v3-8 TPU and a masking probability of 15%.
## Example
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="DarshanDeshpande/marathi-distilbert",
tokenizer="DarshanDeshpande/marathi-distilbert",
)
fill_mask("हा खरोखर चांगला [MASK] आहे.")
```
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<h3>Authors </h3>
<h5>1. Darshan Deshpande: <a href="https://github.com/DarshanDeshpande">GitHub</a>, <a href="https://www.linkedin.com/in/darshan-deshpande/">LinkedIn</a><h5>
<h5>2. Harshavardhan Abichandani: <a href="https://github.com/Baras64">GitHub</a>, <a href="http://www.linkedin.com/in/harsh-abhi">LinkedIn</a><h5> |
Helsinki-NLP/opus-mt-en-rw | 6cd0e5c91a2e1a06838ab3eae0d8fc45d257a2d9 | 2021-09-09T21:38:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"rw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-rw | 58 | null | transformers | 5,718 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-rw
* source languages: en
* target languages: rw
* OPUS readme: [en-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.rw | 33.3 | 0.569 |
| Tatoeba.en.rw | 13.8 | 0.503 |
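A minimal usage sketch, assuming standard `transformers` pipeline usage (not part of the original OPUS-MT release notes):
```python
# Minimal usage sketch: English -> Kinyarwanda translation with the
# standard translation pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-rw")
print(translator("How are you today?")[0]["translation_text"])
```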
|
IlyaGusev/sber_rut5_filler | c7f5e2b9cbb3937f2632f827364b8e922f788e42 | 2022-07-13T15:34:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | IlyaGusev | null | IlyaGusev/sber_rut5_filler | 58 | 1 | transformers | 5,719 | ---
language:
- ru
license: apache-2.0
widget:
- text: Эта блядь меня заебала</s> Эта <extra_id_0> меня <extra_id_1>
--- |
ankur310794/bart-base-keyphrase-generation-openkp | 92001c83e3f1533ca560f083cf38db2a2390fd11 | 2021-04-09T08:43:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ankur310794 | null | ankur310794/bart-base-keyphrase-generation-openkp | 58 | null | transformers | 5,720 | Entry not found |
danghuy1999/gpt2-viwiki | c1e1934242d48aabd63b2f9ce4d5aa025ecf7c27 | 2021-08-08T17:59:19.000Z | [
"pytorch",
"tf",
"gpt2",
"vi",
"transformers",
"gpt2-viwiki",
"license:mit"
] | null | false | danghuy1999 | null | danghuy1999/gpt2-viwiki | 58 | 1 | transformers | 5,721 | ---
language: vi
tags:
- gpt2-viwiki
license: mit
---
# GPT-2 Fine-tuning in Vietnamese Wikipedia
## Model description
This is a Vietnamese GPT-2 model which is finetuned on the [Latest pages articles of Vietnamese Wikipedia](https://dumps.wikimedia.org/viwiki/latest/viwiki-latest-pages-articles.xml.bz2).
## Dataset
The dataset is about 800 MB and includes many articles from Wikipedia.
## How to use
You can use this model to:
- Tokenize Vietnamese sentences with GPT2Tokenizer.
- Generate text seems like a Wikipedia article.
- Finetune it to other downstream tasks.
Here is how to use the model to generate text in Pytorch:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('danghuy1999/gpt2-viwiki')
model = GPT2LMHeadModel.from_pretrained('danghuy1999/gpt2-viwiki').to('cuda')
text = "Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử"
input_ids = tokenizer.encode(text, return_tensors='pt').to('cuda')
max_length = 100
sample_outputs = model.generate(input_ids,pad_token_id=tokenizer.eos_token_id,
do_sample=True,
max_length=max_length,
min_length=max_length,
top_k=40,
num_beams=5,
early_stopping=True,
no_repeat_ngram_size=2,
num_return_sequences=3)
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist())))
print('\n---')
```
And the results are:
```bash
>> Generated text 1
Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử.
Mặc dù thuyết tương đối tổng quát không được áp dụng rộng rãi trong nhiều lĩnh vực khác nhau, nhưng các nhà lý thuyết đã đưa ra khái niệm rộng hơn về tính chất của vật chất. Một trong những nghiên cứu của Albert Einstein về sự tồn tại của hệ quy chiếu quán tính, ông đã đề xuất rằng một lực hấp dẫn có thể có khối lượng bằng năng lượng của nó. Tuy nhiên, những người cho rằng
---
>> Generated text 2
Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Tuy nhiên, thuyết tương đối hẹp không phải là lý thuyết của Einstein.
Cho đến tận cuối thế kỷ 19, Albert Einstein đã chứng minh được sự tồn tại của lực hấp dẫn trong một số trường hợp đặc biệt. Năm 1915, ông đưa ra khái niệm "khối lượng" để miêu tả chuyển động lượng của một hạt bằng khối lượng nghỉ của nó. Ông cho rằng năng lượng "m" là một thành phần của
---
>> Generated text 3
Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Tuy nhiên, thuyết tương đối hẹp không được chấp nhận rộng rãi bởi các nhà lý thuyết.
Một trong những nghiên cứu của Einstein về tính chất của lực hấp dẫn là vào năm 1905, ông đã đưa ra một khái niệm về lực học. Ông đã phát biểu rằng nếu một hạt mang điện tích dương, nó có thể chuyển đổi năng lượng của nó thành các hạt khác. Năm 1915, Arthur Eddington phát minh ra
---
```
You can do the same with **TensorFlow** by using **TFGPT2LMHeadModel** instead of **GPT2LMHeadModel**.
|
educhav/J-DialoGPT-small | 5850d899162791adedf231186fe7a4c1580a21bb | 2022-01-22T02:35:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | educhav | null | educhav/J-DialoGPT-small | 58 | null | transformers | 5,722 | ---
tags:
- conversational
---
# J Cole Patt |
ethzanalytics/ai-msgbot-gpt2-XL-dialogue | 91b073bed82d78b0149d5e59fca58d07f99188f1 | 2022-02-22T15:46:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:natural questions",
"transformers",
"gpt",
"license:mit"
] | text-generation | false | ethzanalytics | null | ethzanalytics/ai-msgbot-gpt2-XL-dialogue | 58 | 1 | transformers | 5,723 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "Do you like my new haircut?\nperson beta:\n\n"
example_title: "haircut"
- text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n"
example_title: "teaching"
- text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n"
example_title: "favorite"
- text: "how much does it cost?\nperson beta:\n\n"
example_title: "money"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
---
# ai-msgbot GPT2-XL-dialogue
_NOTE: model card is WIP_
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`. The resulting model was then **further fine-tuned** on the [Daily Dialogues](http://yanran.li/dailydialog) for 40k steps, with **34**/36 layers frozen.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test is to explicitly append `person beta` to the prompt, so the model is forced to respond as that speaker instead of simply continuing the entered text (see the sketch below).
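A minimal generation sketch, assuming standard `transformers` pipeline usage; the sampling parameters mirror the inference settings listed in the card metadata above:
```python
# Minimal usage sketch: prompt the chatbot and force a "person beta" response.
# Sampling parameters follow the inference settings declared in the card metadata.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ethzanalytics/ai-msgbot-gpt2-XL-dialogue",
)
prompt = "Do you like my new haircut?\nperson beta:\n\n"
outputs = generator(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    no_repeat_ngram_size=3,
    repetition_penalty=2.1,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```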
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
``` |
funnel-transformer/large-base | cebf5df5b78c98badc256dfdf302d9403fd08bae | 2020-12-11T21:40:28.000Z | [
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | funnel-transformer | null | funnel-transformer/large-base | 58 | null | transformers | 5,724 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer large model (B8-8-8 without decoder)
Pretrained model on English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `large` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
huggingtweets/destiny_thememe | a87042566e8e9c010f3d80f1d54f97eda0fcee43 | 2021-05-22T01:27:17.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/destiny_thememe | 58 | null | transformers | 5,725 | ---
language: en
thumbnail: https://www.huggingtweets.com/destiny_thememe/1616803427645/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1372396296699404291/SySu1wAp_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">♦️Moira Perfected♦️ 🤖 AI Bot </div>
<div style="font-size: 15px">@destiny_thememe bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@destiny_thememe's tweets](https://twitter.com/destiny_thememe).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 186 |
| Short tweets | 772 |
| Tweets kept | 2284 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bbkix40/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @destiny_thememe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20xpitr1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20xpitr1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/destiny_thememe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/muratpak | 5b92089b726101c723bb60dd38647a1f057b085c | 2021-10-18T17:22:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/muratpak | 58 | null | transformers | 5,726 | ---
language: en
thumbnail: https://www.huggingtweets.com/muratpak/1634577747584/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442159742558765064/RFB5JjIk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pak</div>
<div style="text-align: center; font-size: 14px;">@muratpak</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pak.
| Data | Pak |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 686 |
| Short tweets | 964 |
| Tweets kept | 1600 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1s58abff/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @muratpak's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/30zzcgkm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/30zzcgkm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/muratpak')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
joeddav/distilbert-base-uncased-agnews-student | 41503004a8bc8c31721fc37a384f406544862d97 | 2021-02-18T20:41:19.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"en",
"dataset:ag_news",
"transformers",
"tensorflow",
"license:mit"
] | text-classification | false | joeddav | null | joeddav/distilbert-base-uncased-agnews-student | 58 | null | transformers | 5,727 | ---
language: en
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- ag_news
license: mit
widget:
- text: "Armed conflict has been a near-constant policial and economic burden."
- text: "Tom Brady won his seventh Super Bowl last night."
- text: "Dow falls more than 100 points after disappointing jobs data"
- text: "A new moon has been discovered in Jupter's orbit."
---
# distilbert-base-uncased-agnews-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using [this
script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation).
It is the result of the demo notebook
[here](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing), where more details
about the model can be found.
- Teacher model: [roberta-large-mnli](https://huggingface.co/roberta-large-mnli)
- Teacher hypothesis template: `"This text is about {}."`
## Intended Usage
The model can be used like any other model trained on AG's News, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student.
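A minimal usage sketch, assuming standard `transformers` pipeline usage (the example headline is taken from the widget above):
```python
# Minimal usage sketch: classify a news headline with the distilled student model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="joeddav/distilbert-base-uncased-agnews-student",
)
print(classifier("Dow falls more than 100 points after disappointing jobs data"))
```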
|
jonatasgrosman/paraphrase | 73b6aede4e8f82dddd37fd14f2124e9088c94356 | 2021-05-23T06:01:26.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jonatasgrosman | null | jonatasgrosman/paraphrase | 58 | null | transformers | 5,728 | testing
|
m3hrdadfi/xlmr-large-qa-fa | 81c1e240f5b2def9e1db967d23693e4182a8b556 | 2021-10-12T08:36:53.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"question-answering",
"fa",
"multilingual",
"dataset:SajjadAyoubi/persian_qa",
"transformers",
"roberta",
"squad",
"model-index",
"autotrain_compatible"
] | question-answering | false | m3hrdadfi | null | m3hrdadfi/xlmr-large-qa-fa | 58 | null | transformers | 5,729 | ---
language:
- fa
- multilingual
tags:
- question-answering
- xlm-roberta
- roberta
- squad
datasets:
- SajjadAyoubi/persian_qa
metrics:
- squad_v2
widget:
- text: "کاربردهای لاپلاسین؟"
context: "معادلهٔ لاپلاس یک معادله دیفرانسیل با مشتقات جزئی است که از اهمّیّت و کاربرد فراوانی در ریاضیّات، فیزیک، و مهندسی برخوردار است. به عنوان چند نمونه میشود به زمینههایی همچون الکترومغناطیس، ستارهشناسی، و دینامیک سیالات اشاره کرد که حلّ این معادله در آنها کاربرد دارد."
- text: "نام دیگر شب یلدا؟"
context: "شب یَلدا یا شب چلّه یکی از کهنترین جشنهای ایرانی است. در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها در نیمکرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته میشود. نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانیاست."
- text: "کهن ترین جشن ایرانیها چه است؟"
context: "شب یَلدا یا شب چلّه یکی از کهنترین جشنهای ایرانی است. در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها در نیمکرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته میشود. نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانیاست."
- text: "شب یلدا مصادف با چه پدیدهای است؟"
context: "شب یَلدا یا شب چلّه یکی از کهنترین جشنهای ایرانی است. در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها در نیمکرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته میشود. نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانیاست."
model-index:
- name: XLM-RoBERTa large for QA (PersianQA - 🇮🇷)
results:
- task:
type: question-answering
name: Question Answering
dataset:
type: SajjadAyoubi/persian_qa
name: PersianQA
args: fa
metrics:
- type: squad_v2
value: 83.46
name: Eval F1
args: max_order
- type: squad_v2
value: 66.88
name: Eval Exact
args: max_order
---
# XLM-RoBERTa large for QA (PersianQA - 🇮🇷)
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the [PersianQA](https://github.com/sajjjadayobi/PersianQA) dataset.
## Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
- mixed_precision_training: Native AMP
## Performance
Evaluation results on the eval set with the official [eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
### Evalset
```text
"HasAns_exact": 58.678955453149,
"HasAns_f1": 82.3746683591845,
"HasAns_total": 651,
"NoAns_exact": 86.02150537634408,
"NoAns_f1": 86.02150537634408,
"NoAns_total": 279,
"exact": 66.88172043010752,
"f1": 83.46871946433232,
"total": 930
```
## Usage
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name_or_path = "m3hrdadfi/xlmr-large-qa-fa"
nlp = pipeline('question-answering', model=model_name_or_path, tokenizer=model_name_or_path)
context = """
شب یَلدا یا شب چلّه یکی از کهنترین جشنهای ایرانی است.
در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها
در نیمکرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته میشود.
نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانیاست.
"""
# Translation [EN]
# context = [
# Yalda night or Cheleh night is one of the oldest Iranian celebrations.
# The festival celebrates the longest night of the year, followed by longer days in the Northern Hemisphere,
# which coincides with the Winter Revolution.
# Another name for this night is "Chelleh", because holding this celebration is an Iranian ritual.
# ]
questions = [
"نام دیگر شب یلدا؟",
"کهن ترین جشن ایرانیها چه است؟",
"شب یلدا مصادف با چه پدیدهای است؟"
]
# Translation [EN]
# questions = [
# Another name for Yalda night?
# What is the ancient tradition of Iranian celebration?
# What phenomenon does Yalda night coincide with?
# ]
kwargs = {}
for question in questions:
r = nlp(question=question, context=context, **kwargs)
answer = " ".join([token.strip() for token in r["answer"].strip().split() if token.strip()])
print(f"{question} {answer}")
```
**Output**
```text
نام دیگر شب یلدا؟ «چِلّه»
کهن ترین جشن ایرانیها چه است؟ شب یَلدا یا شب چلّه
شب یلدا مصادف با چه پدیدهای است؟ انقلاب زمستانی
# Translation [EN]
# Another name for Yalda night? Cheleh night
# What is the ancient tradition of Iranian celebration? Yalda night or Chele night
# What phenomenon does Yalda night coincide with? Winter revolution
```
## Authors
- [Mehrdad Farahani](https://github.com/m3hrdadfi)
## Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
neuralspace-reverie/indic-transformers-bn-roberta | 3ba101b2b34f7a2af1c5595f1d4be6598e692e5b | 2021-05-20T18:47:17.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"bn",
"transformers",
"MaskedLM",
"Bengali",
"RoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
] | fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-bn-roberta | 58 | null | transformers | 5,730 | ---
language:
- bn
tags:
- MaskedLM
- Bengali
- RoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Bengali RoBERTa
## Model description
This is a RoBERTa language model pre-trained on a ~6 GB monolingual training corpus. The pre-training data was mostly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-bn-roberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-bn-roberta')
text = "আপনি কেমন আছেন?"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 10, 768]
```
#### Limitations and bias
The original language model has been trained using `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually using the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
salesken/text_generate | e11e6a9123081a2f1e467b8dca90930c50d7c265 | 2021-05-23T12:38:21.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers",
"salesken"
] | text-generation | false | salesken | null | salesken/text_generate | 58 | 1 | transformers | 5,731 | ---
tags: salesken
widget:
- text: "Which name is also used to describe the Amazon rainforest in English? "
---
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
else :
device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/text_generate")
model = AutoModelWithLMHead.from_pretrained("salesken/text_generate").to(device)
input_query="tough challenges make you stronger. "
input_ids = tokenizer.encode(input_query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=1024,
temperature=0.99,
top_k = 10,
num_return_sequences=1)
for i in range(len(sample_outputs)):
    print(tokenizer.decode(sample_outputs[i], skip_special_tokens=True))
``` |
ykacer/bert-base-cased-imdb-sequence-classification | 05012cac18df27f33acd2c512f16a48542108c2f | 2021-05-20T09:31:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:imdb",
"transformers",
"sequence",
"classification",
"license:apache-2.0"
] | text-classification | false | ykacer | null | ykacer/bert-base-cased-imdb-sequence-classification | 58 | null | transformers | 5,732 |
---
language:
- en
thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png
tags:
- sequence
- classification
license: apache-2.0
datasets:
- imdb
metrics:
- accuracy
---
|
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_test_set_05_03_2022-05_48_17 | d5e6f66dc9c1a2f9ce82e32b176a1c642d284878 | 2022-03-05T04:50:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_test_set_05_03_2022-05_48_17 | 58 | null | transformers | 5,733 | Entry not found |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_test_set_05_03_2022-05_53_50 | 0e2c93d656a0817944f32b15c541813f518169ee | 2022-03-05T04:56:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_test_set_05_03_2022-05_53_50 | 58 | null | transformers | 5,734 | Entry not found |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_webDiscourse_05_03_2022-06_08_23 | 563420fe1d72c7a8d9968a9ec3ce1bbe805aa384 | 2022-03-05T05:11:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_webDiscourse_05_03_2022-06_08_23 | 58 | null | transformers | 5,735 | Entry not found |
l3cube-pune/marathi-ner | 7cec0d26b321405a2da5415fc01314a5fafe9eb5 | 2022-06-26T15:15:30.000Z | [
"pytorch",
"bert",
"token-classification",
"mr",
"dataset:L3Cube-MahaNER",
"arxiv:2204.06029",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | l3cube-pune | null | l3cube-pune/marathi-ner | 58 | null | transformers | 5,736 | ---
language: mr
tags:
license: cc-by-4.0
datasets:
- L3Cube-MahaNER
widget:
- text: "I like you. </s></s> I love you."
---
## MahaNER-BERT
MahaNER-BERT is a MahaBERT (l3cube-pune/marathi-bert) model fine-tuned on L3Cube-MahaNER, a Marathi named entity recognition dataset.
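A minimal inference sketch, assuming the standard `transformers` token-classification pipeline works for this checkpoint as it does for other BERT-based NER models:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="l3cube-pune/marathi-ner",
    aggregation_strategy="simple",  # groups word pieces into whole entities
)
print(ner("<your Marathi sentence here>"))
```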
[Dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.06029).
```
@InProceedings{litake-EtAl:2022:WILDRE6,
author = {Litake, Onkar and Sabane, Maithili Ravindra and Patil, Parth Sachin and Ranade, Aparna Abhijeet and Joshi, Raviraj},
title = {L3Cube-MahaNER: A Marathi Named Entity Recognition Dataset and BERT models},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {29--34}
}
``` |
wanyu/IteraTeR-BART-Revision-Generator | b4fd9ef8529abba0610b1d73c56d7cdb5143d70f | 2022-04-04T20:09:49.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:IteraTeR_full_sent",
"arxiv:2203.03802",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | wanyu | null | wanyu/IteraTeR-BART-Revision-Generator | 58 | null | transformers | 5,737 | ---
datasets:
- IteraTeR_full_sent
---
# IteraTeR BART model
This model was obtained by fine-tuning [facebook/bart-base](https://huggingface.co/facebook/bart-base) on [IteraTeR-full-sent](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset.
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802) <br>
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
## Text Revision Task
Given an edit intention and an original sentence, our model can generate a revised sentence.<br>
The edit intentions are provided by [IteraTeR-full-sent](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset, which are categorized as follows:
<table>
<tr>
<th>Edit Intention</th>
<th>Definition</th>
<th>Example</th>
</tr>
<tr>
<td>clarity</td>
<td>Make the text more formal, concise, readable and understandable.</td>
<td>
Original: It's like a house which anyone can enter in it. <br>
Revised: It's like a house which anyone can enter.
</td>
</tr>
<tr>
<td>fluency</td>
<td>Fix grammatical errors in the text.</td>
<td>
Original: In the same year he became the Fellow of the Royal Society. <br>
Revised: In the same year, he became the Fellow of the Royal Society.
</td>
</tr>
<tr>
<td>coherence</td>
<td>Make the text more cohesive, logically linked and consistent as a whole.</td>
<td>
Original: Achievements and awards Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy. <br>
Revised: Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy.
</td>
</tr>
<tr>
<td>style</td>
<td>Convey the writer’s writing preferences, including emotions, tone, voice, etc..</td>
<td>
Original: She was last seen on 2005-10-22. <br>
Revised: She was last seen on October 22, 2005.
</td>
</tr>
</table>
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("wanyu/IteraTeR-BART-Revision-Generator")
model = AutoModelForSeq2SeqLM.from_pretrained("wanyu/IteraTeR-BART-Revision-Generator")
before_input = '<fluency> I likes coffee.'
model_input = tokenizer(before_input, return_tensors='pt')
model_outputs = model.generate(**model_input, num_beams=8, max_length=1024)
after_text = tokenizer.batch_decode(model_outputs, skip_special_tokens=True)[0]
``` |
arghya007/roberta-scarcasm-discriminator | 7e3e92ff796c499b68daefff45369064dd3f95b2 | 2022-04-10T17:42:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | arghya007 | null | arghya007/roberta-scarcasm-discriminator | 58 | null | transformers | 5,738 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-en-ces_slk | 0850dc863d8df39712a09a8068d23a3acca4fb17 | 2022-06-01T13:03:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ces",
"slk",
"cs",
"sk",
"en",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-ces_slk | 58 | null | transformers | 5,739 | ---
language:
- ces
- slk
- cs
- sk
- en
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-ces_slk
results:
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: flores101-devtest
type: flores_101
args: eng ces devtest
metrics:
- name: BLEU
type: bleu
value: 34.1
- task:
name: Translation eng-slk
type: translation
args: eng-slk
dataset:
name: flores101-devtest
type: flores_101
args: eng slk devtest
metrics:
- name: BLEU
type: bleu
value: 35.9
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 33.4
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 33.4
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: news-test2008
type: news-test2008
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 22.8
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 47.5
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2009
type: wmt-2009-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 24.3
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2010
type: wmt-2010-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 24.4
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2011
type: wmt-2011-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 25.5
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2012
type: wmt-2012-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 22.6
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2013
type: wmt-2013-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 27.4
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2014
type: wmt-2014-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 31.4
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2015
type: wmt-2015-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 27.0
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2016
type: wmt-2016-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 29.9
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2017
type: wmt-2017-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 24.9
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2018
type: wmt-2018-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 24.6
- task:
name: Translation eng-ces
type: translation
args: eng-ces
dataset:
name: newstest2019
type: wmt-2019-news
args: eng-ces
metrics:
- name: BLEU
type: bleu
value: 26.4
---
# opus-mt-tc-big-en-ces_slk
Neural machine translation model for translating from English (en) to Czech and Slovak (ces+slk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): ces
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ces+slk/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT eng-ces+slk README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ces+slk/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ces<< We were enemies.",
">>ces<< Do you think Tom knows what's going on?"
]
model_name = "pytorch-models/opus-mt-tc-big-en-ces_slk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Byli jsme nepřátelé.
# Myslíš, že Tom ví, co se děje?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ces_slk")
print(pipe(">>ces<< We were enemies."))
# expected output: Byli jsme nepřátelé.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ces+slk/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ces+slk/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-ces | tatoeba-test-v2021-08-07 | 0.66128 | 47.5 | 13824 | 91332 |
| eng-ces | flores101-devtest | 0.60411 | 34.1 | 1012 | 22101 |
| eng-slk | flores101-devtest | 0.62415 | 35.9 | 1012 | 22543 |
| eng-ces | multi30k_test_2016_flickr | 0.58547 | 33.4 | 1000 | 10503 |
| eng-ces | multi30k_test_2018_flickr | 0.59236 | 33.4 | 1071 | 11631 |
| eng-ces | newssyscomb2009 | 0.52702 | 25.3 | 502 | 10032 |
| eng-ces | news-test2008 | 0.50286 | 22.8 | 2051 | 42484 |
| eng-ces | newstest2009 | 0.52152 | 24.3 | 2525 | 55533 |
| eng-ces | newstest2010 | 0.52527 | 24.4 | 2489 | 52955 |
| eng-ces | newstest2011 | 0.52721 | 25.5 | 3003 | 65653 |
| eng-ces | newstest2012 | 0.50007 | 22.6 | 3003 | 65456 |
| eng-ces | newstest2013 | 0.53643 | 27.4 | 3000 | 57250 |
| eng-ces | newstest2014 | 0.58944 | 31.4 | 3003 | 59902 |
| eng-ces | newstest2015 | 0.55094 | 27.0 | 2656 | 45858 |
| eng-ces | newstest2016 | 0.56864 | 29.9 | 2999 | 56998 |
| eng-ces | newstest2017 | 0.52504 | 24.9 | 3005 | 54361 |
| eng-ces | newstest2018 | 0.52490 | 24.6 | 2983 | 54652 |
| eng-ces | newstest2019 | 0.53994 | 26.4 | 1997 | 43113 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 16:46:48 EEST 2022
* port machine: LM0-400-22516.local
|
charityking2358/taglish-electra | 566e2edf3c9dcc7755d91f73df3e5d51944d6c74 | 2022-04-26T02:19:48.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra | 58 | null | transformers | 5,740 | ## Taglish-Electra
Our Taglish-Electra model was pretrained on two Filipino datasets and one English dataset to improve performance on Filipino text mixed with English, where speakers may code-switch between the two languages.
1) Openwebtext (English)
2) WikiText-TL-39 (Filipino)
3) [TLUnified Large Scale Corpus](https://www.blaisecruz.com/resources/)
This is the discriminator model, which is the main Transformer used for fine-tuning on downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
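A minimal loading sketch, assuming the repository ships a standard ELECTRA config and tokenizer; the sequence-classification head below is newly initialized and must be fine-tuned on a labelled Taglish dataset before its predictions mean anything:
```python
from transformers import AutoTokenizer, ElectraForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("charityking2358/taglish-electra")
model = ElectraForSequenceClassification.from_pretrained(
    "charityking2358/taglish-electra", num_labels=2  # num_labels is a placeholder
)

# Sample code-switched (Taglish) sentence; replace with your own data.
inputs = tokenizer("Ang ganda ng weather today!", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```
|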
Jackett/detr_test | 829cf95c562b6f28dc31c89e897c7aeb9270137a | 2022-05-17T07:54:37.000Z | [
"pytorch",
"detr",
"object-detection",
"transformers"
] | object-detection | false | Jackett | null | Jackett/detr_test | 58 | null | transformers | 5,741 | Entry not found |
smc/PANDA_ConvNeXT | d5d34bfd500b5e013766f4eae4933cb4371ce3fa | 2022-05-25T20:19:25.000Z | [
"pytorch",
"convnext",
"image-classification",
"transformers",
"model-index"
] | image-classification | false | smc | null | smc/PANDA_ConvNeXT | 58 | 1 | transformers | 5,742 | ---
tags:
- image-classification
- pytorch
metrics:
- accuracy
- Cohen's Kappa
model-index:
- name: PANDA_ConvNeXT
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5491307377815247
- name: Quadratic Cohen's Kappa
type: Quadratic Cohen's Kappa
value: 0.6630877256393433
---
# PANDA_ConvNeXT
An attempt to use a ConvNeXT for medical image classification (ISUP grading in prostate histopathology images). It currently uses a tiled and concatenated whole-slide image (WSI) as input.
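A rough inference sketch, assuming the repository includes the standard image preprocessing config; the file name and the index-to-ISUP mapping are placeholders:
```python
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

extractor = AutoFeatureExtractor.from_pretrained("smc/PANDA_ConvNeXT")
model = AutoModelForImageClassification.from_pretrained("smc/PANDA_ConvNeXT")

# "tiled_wsi.jpeg" stands in for a 1152x1152 image built from 36 tiled WSI patches.
image = Image.open("tiled_wsi.jpeg").convert("RGB")
inputs = extractor(images=image, return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(f"Predicted class index (assumed to follow the ISUP 0-5 order shown below): {pred}")
```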
Example images, shape (1152, 1152, 3), built from 36 WSI patches:
ISUP 0:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c02d3bb3a62519b31c63d0301c6843e_0.jpeg">
ISUP 1:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0cee71ab57422e04f76e09ef2186fcd5_1.jpeg">
ISUP 2:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/00bbc1482301d16de3ff63238cfd0b34_2.jpeg">
ISUP 3:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c5c2d16c0f2e399b7be641e7e7f66d9_3.jpeg">
ISUP 4:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c88d7c7033e2048b1068e208b105270_4.jpeg">
ISUP 5:
<img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/00c15b23b30a5ba061358d9641118904_5.jpeg"> |
RUCAIBox/mvp-summarization | 8b0bfb62f0c9b8a2b8ff05d3812903536afa3d4e | 2022-06-27T02:28:20.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"summarization",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mvp-summarization | 58 | null | transformers | 5,743 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- summarization
pipeline_tag: text2text-generation
widget:
- text: "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons."
example_title: "Example1"
- text: "Summarize: Jorge Alfaro drove in two runs, Aaron Nola pitched seven innings of two-hit ball and the Philadelphia Phillies beat the Los Angeles Dodgers 2-1 Thursday, spoiling Clayton Kershaw's first start in almost a month. Hitting out of the No. 8 spot in the ..."
example_title: "Example2"
---
# MVP-summarization
The MVP-summarization model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-summarization is a prompt-based model that MVP is further equipped with prompts pre-trained using labeled summarization datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-summarization is specially designed for summarization tasks, such as new summarization (CNN/DailyMail, XSum) and dialog summarization (SAMSum).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")
>>> inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
AryaSuprana/BRATA_RoBERTaBali | 7b7518cbe564d329a9869059874d8b160cb25e20 | 2022-06-11T05:01:40.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"ban",
"dataset:WikiBali",
"dataset:Suara Saking Bali",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AryaSuprana | null | AryaSuprana/BRATA_RoBERTaBali | 58 | null | transformers | 5,744 | ---
language: "ban"
datasets:
- WikiBali
- Suara Saking Bali
widget:
- text: "Kalsium silih <mask> datu kimia antuk simbol Ca miwah wilangan atom 20."
example_title: "Conto 1"
- text: "Tabuan inggih <mask> silih tunggil soroh beburon sane madue kampid."
example_title: "Conto 2"
---
BRATA (Basa Bali Used for Pretraining RoBERTa) is a language model pretrained on Basa Bali (the Balinese language) with the RoBERTa-base-uncased configuration. The pretraining datasets were collected by extracting WikiBali (the Balinese Wikipedia) and several sources from the Suara Saking Bali website. The model was pretrained on Google Colab Pro with a Tesla P100-PCIE-16GB GPU, using 200 epochs and a batch size of 2. The smallest training loss can be seen in the Training metrics (Metrics) tab.
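A minimal fill-mask sketch using the first widget example above:
```python
from transformers import pipeline

# RoBERTa-style checkpoints use "<mask>" as the mask token, as in the widget examples.
fill_mask = pipeline("fill-mask", model="AryaSuprana/BRATA_RoBERTaBali")
preds = fill_mask("Kalsium silih <mask> datu kimia antuk simbol Ca miwah wilangan atom 20.")
for p in preds:
    print(p["token_str"], round(p["score"], 4))
```
|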
Anjoe/gbert-large | 0d03ce7b19d93c48c43bf2cd9640af58bcfaa536 | 2022-06-25T17:08:58.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Anjoe | null | Anjoe/gbert-large | 58 | null | transformers | 5,745 | ---
tags:
- generated_from_trainer
model-index:
- name: gbert-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-large
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large).
It was fine-tuned on poetry from Projekt Gutenberg in order to support masked-language-modeling tasks in poetry generation (synonym creation for rhythm and finding rhyming pairs).
It achieves the following result on the evaluation set:
- Loss: 2.1519
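For the intended use described above, a minimal fill-mask sketch is shown below; the verse is adapted from a classic German poem and serves only as an illustration:
```python
from transformers import pipeline

# gbert is BERT-based, so the mask token is "[MASK]".
fill_mask = pipeline("fill-mask", model="Anjoe/gbert-large")
for pred in fill_mask("Die goldnen [MASK] prangen am Himmel hell und klar."):
    print(pred["token_str"], round(pred["score"], 4))
```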
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7515 | 1.0 | 2481 | 2.5493 |
| 2.5626 | 2.0 | 4962 | 2.4392 |
| 2.4839 | 3.0 | 7443 | 2.3692 |
| 2.4082 | 4.0 | 9924 | 2.3425 |
| 2.3109 | 5.0 | 12405 | 2.2934 |
| 2.2551 | 6.0 | 14886 | 2.2582 |
| 2.2154 | 7.0 | 17367 | 2.2062 |
| 2.2003 | 8.0 | 19848 | 2.1962 |
| 2.1616 | 9.0 | 22329 | 2.1991 |
| 2.1462 | 10.0 | 24810 | 2.1519 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ArnavL/roberta-base-agnews-0 | 9e3cf04110af9982ee2b4c18a7d310fd35d6255e | 2022-07-12T00:34:24.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ArnavL | null | ArnavL/roberta-base-agnews-0 | 58 | null | transformers | 5,746 | Entry not found |
kaj/evoker | 640000717de9e5645bd524c391b2271564abd2db | 2022-07-26T02:34:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | kaj | null | kaj/evoker | 58 | 1 | transformers | 5,747 | ---
license: mit
---
|
Salesforce/qaconv-unifiedqa-t5-base | ff060b2e623e10ebabcfec840809eb4ddd7f92d2 | 2021-06-23T10:13:43.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Salesforce | null | Salesforce/qaconv-unifiedqa-t5-base | 57 | null | transformers | 5,748 | Entry not found |
TransQuest/microtransquest-en_lv-pharmaceutical-nmt | 983d0506c36d60d468a60a16b4e4a0a9f799a48b | 2021-06-04T08:21:54.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"en-lv",
"transformers",
"Quality Estimation",
"microtransquest",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | TransQuest | null | TransQuest/microtransquest-en_lv-pharmaceutical-nmt | 57 | null | transformers | 5,749 | ---
language: en-lv
tags:
- Quality Estimation
- microtransquest
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_lv-pharmaceutical-nmt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
UBC-NLP/AraT5-msa-small | dcf01ca943e90f2777d38c36d27d090203096a34 | 2022-05-26T18:27:22.000Z | [
"pytorch",
"tf",
"t5",
"ar",
"transformers",
"Arabic T5",
"MSA",
"Twitter",
"Arabic Dialect",
"Arabic Machine Translation",
"Arabic Text Summarization",
"Arabic News Title and Question Generation",
"Arabic Paraphrasing and Transliteration",
"Arabic Code-Switched Translation"
] | null | false | UBC-NLP | null | UBC-NLP/AraT5-msa-small | 57 | 2 | transformers | 5,750 | ---
language:
- ar
tags:
- Arabic T5
- MSA
- Twitter
- Arabic Dialect
- Arabic Machine Translation
- Arabic Text Summarization
- Arabic News Title and Question Generation
- Arabic Paraphrasing and Transliteration
- Arabic Code-Switched Translation
---
# AraT5-msa-small
# AraT5: Text-to-Text Transformers for Arabic Language Generation
<img src="https://huggingface.co/UBC-NLP/AraT5-base/resolve/main/AraT5_CR_new.png" alt="AraT5" width="45%" height="35%" align="right"/>
This is the repository accompanying our paper [AraT5: Text-to-Text Transformers for Arabic Language Understanding and Generation](https://aclanthology.org/2022.acl-long.47/). In this repository we introduce **AraT5<sub>MSA</sub>**, **AraT5<sub>Tweet</sub>**, and **AraT5**: three powerful Arabic-specific text-to-text Transformer-based models.
---
# How to use AraT5 models
Below is an example for fine-tuning **AraT5-base** for News Title Generation on the Aranews dataset
``` bash
!python run_trainier_seq2seq_huggingface.py \
--learning_rate 5e-5 \
--max_target_length 128 --max_source_length 128 \
--per_device_train_batch_size 8 --per_device_eval_batch_size 8 \
--model_name_or_path "UBC-NLP/AraT5-base" \
--output_dir "/content/AraT5_FT_title_generation" --overwrite_output_dir \
--num_train_epochs 3 \
--train_file "/content/ARGEn_title_genration_sample_train.tsv" \
--validation_file "/content/ARGEn_title_genration_sample_valid.tsv" \
--task "title_generation" --text_column "document" --summary_column "title" \
--load_best_model_at_end --metric_for_best_model "eval_bleu" --greater_is_better True --evaluation_strategy epoch --logging_strategy epoch --predict_with_generate\
--do_train --do_eval
```
For more details about the fine-tuning example, please read this notebook [](https://github.com/UBC-NLP/araT5/blob/main/examples/Fine_tuning_AraT5.ipynb)
In addition, we release the fine-tuned checkpoint of the News Title Generation (NGT) which is described in the paper. The model available at Huggingface ([UBC-NLP/AraT5-base-title-generation](https://huggingface.co/UBC-NLP/AraT5-base-title-generation)).
For more details, please visit our own [GitHub](https://github.com/UBC-NLP/araT5).
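A minimal loading sketch for this checkpoint; AraT5-msa-small is a pretrained model, so fine-tune it on a downstream task (as in the script above) before expecting meaningful generations:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/AraT5-msa-small")
model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/AraT5-msa-small")
print(f"Loaded AraT5-msa-small with {model.num_parameters():,} parameters")
```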
# AraT5 Models Checkpoints
AraT5 Pytorch and TensorFlow checkpoints are available on the Huggingface website for direct download and use ```exclusively for research```. ```For commercial use, please contact the authors via email @ (muhammad.mageed[at]ubc[dot]ca).```
| **Model** | **Link** |
|---------|:------------------:|
| **AraT5-base** | [https://huggingface.co/UBC-NLP/AraT5-base](https://huggingface.co/UBC-NLP/AraT5-base) |
| **AraT5-msa-base** | [https://huggingface.co/UBC-NLP/AraT5-msa-base](https://huggingface.co/UBC-NLP/AraT5-msa-base) |
| **AraT5-tweet-base** | [https://huggingface.co/UBC-NLP/AraT5-tweet-base](https://huggingface.co/UBC-NLP/AraT5-tweet-base) |
| **AraT5-msa-small** | [https://huggingface.co/UBC-NLP/AraT5-msa-small](https://huggingface.co/UBC-NLP/AraT5-msa-small) |
| **AraT5-tweet-small**| [https://huggingface.co/UBC-NLP/AraT5-tweet-small](https://huggingface.co/UBC-NLP/AraT5-tweet-small) |
# BibTex
If you use our models (Arat5-base, Arat5-msa-base, Arat5-tweet-base, Arat5-msa-small, or Arat5-tweet-small ) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{nagoudi-etal-2022-arat5,
title = "{A}ra{T}5: Text-to-Text Transformers for {A}rabic Language Generation",
author = "Nagoudi, El Moatez Billah and
Elmadany, AbdelRahim and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.47",
pages = "628--647",
abstract = "Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects{--}Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN. Although pre-trained with {\textasciitilde}49 less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: https://github.com/UBC-NLP/araT5.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
abhishek/autonlp-japanese-sentiment-59362 | 984f7c46692dcda527a9c985a47b115318bcfd13 | 2021-05-18T22:55:03.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ja",
"dataset:abhishek/autonlp-data-japanese-sentiment",
"transformers",
"autonlp"
] | text-classification | false | abhishek | null | abhishek/autonlp-japanese-sentiment-59362 | 57 | 1 | transformers | 5,751 | ---
tags: autonlp
language: ja
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-japanese-sentiment
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 59362
## Validation Metrics
- Loss: 0.13092292845249176
- Accuracy: 0.9527127414314258
- Precision: 0.9634070704982427
- Recall: 0.9842171959602166
- AUC: 0.9667289746092403
- F1: 0.9737009564152002
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-japanese-sentiment-59362
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-japanese-sentiment-59362", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
akdeniz27/bert-base-turkish-cased-ner | 53ddeb6e865ad7248f002ea6461c5b5fb0ecfa46 | 2021-10-14T19:50:00.000Z | [
"pytorch",
"bert",
"token-classification",
"tr",
"transformers",
"autotrain_compatible"
] | token-classification | false | akdeniz27 | null | akdeniz27/bert-base-turkish-cased-ner | 57 | 1 | transformers | 5,752 | ---
language: tr
widget:
- text: "Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı."
---
# Turkish Named Entity Recognition (NER) Model
This model is a fine-tuned version of "dbmdz/bert-base-turkish-cased",
trained on a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/bert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for entity grouping with the aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9933935699477056
* f1: 0.9592969472710453
* precision: 0.9543530277931161
* recall: 0.9642923563325274
Evaluation results with the test sets proposed in ["Küçük, D., Küçük, D., Arıcı, N. 2016. Türkçe Varlık İsmi Tanıma için bir Veri Kümesi ("A Named Entity Recognition Dataset for Turkish"). IEEE Sinyal İşleme, İletişim ve Uygulamaları Kurultayı. Zonguldak, Türkiye."](https://ieeexplore.ieee.org/document/7495744) paper.
* Test Set Acc. Prec. Rec. F1-Score
* 20010000 0.9946 0.9871 0.9463 0.9662
* 20020000 0.9928 0.9134 0.9206 0.9170
* 20030000 0.9942 0.9814 0.9186 0.9489
* 20040000 0.9943 0.9660 0.9522 0.9590
* 20050000 0.9971 0.9539 0.9932 0.9732
* 20060000 0.9993 0.9942 0.9942 0.9942
* 20070000 0.9970 0.9806 0.9439 0.9619
* 20080000 0.9988 0.9821 0.9649 0.9735
* 20090000 0.9977 0.9891 0.9479 0.9681
* 20100000 0.9961 0.9684 0.9293 0.9485
* Overall 0.9961 0.9720 0.9516 0.9617 |
averyanalex/panorama-rugpt3large | e1adee6678c3a362036fa572cd9bad40e4c3ba66 | 2022-01-13T12:54:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | averyanalex | null | averyanalex/panorama-rugpt3large | 57 | null | transformers | 5,753 | Entry not found |
cahya/wav2vec2-large-xlsr-turkish-artificial-cv | c082543f4ffe0da89ff44b02a0ec7c7dc65afdff | 2021-07-06T00:02:23.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-turkish-artificial-cv | 57 | null | transformers | 5,754 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 14.61
---
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Large-XLSR-Turkish-Artificial-CV, a fine-tuned
[cahya/wav2vec2-large-xlsr-turkish-artificial](https://huggingface.co/cahya/wav2vec2-large-xlsr-turkish-artificial)
model on [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial-cv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 14.61 %
## Training
The Common Voice `train`, `validation`, `other` and `invalidated` datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
cosmoquester/bart-ko-base | ed861421e349d4f8bed6667fb2dd8848eb24dce3 | 2021-08-28T05:12:02.000Z | [
"pytorch",
"tf",
"bart",
"text2text-generation",
"ko",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cosmoquester | null | cosmoquester/bart-ko-base | 57 | 1 | transformers | 5,755 | ---
language: ko
---
# Pretrained BART in Korean
This is a BART model pretrained on multiple Korean datasets.
I used multiple datasets to generalize the model to both colloquial and written text.
The training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program.
The script used to pre-train the model is [here](https://github.com/cosmoquester/transformers-bart-pretrain).
When you use the reference API, you must wrap the sentence with `[BOS]` and `[EOS]` as in the example below.
```
[BOS] 안녕하세요? 반가워요~~ [EOS]
```
You can also test mask filling performance using `[MASK]` token like this.
```
[BOS] [MASK] 먹었어? [EOS]
```
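A rough sketch of the same mask-filling test through the `transformers` API, assuming the tokenizer maps the literal `[BOS]`, `[EOS]` and `[MASK]` strings to its special tokens as the examples above suggest:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("cosmoquester/bart-ko-base")
model = AutoModelForSeq2SeqLM.from_pretrained("cosmoquester/bart-ko-base")

inputs = tokenizer("[BOS] [MASK] 먹었어? [EOS]", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```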
## Benchmark
<style>
table {
border-collapse: collapse;
border-style: hidden;
width: 100%;
}
td, th {
border: 1px solid #4d5562;
padding: 8px;
}
</style>
<table>
<tr>
<th>Dataset</th>
<td>KLUE NLI dev</td>
<td>NSMC test</td>
<td>QuestionPair test</td>
<td colspan="2">KLUE TC dev</td>
<td colspan="3">KLUE STS dev</td>
<td colspan="3">KorSTS dev</td>
<td colspan="2">HateSpeech dev</td>
</tr>
<tr>
<th>Metric</th>
<!-- KLUE NLI -->
<td>Acc</td>
<!-- NSMC -->
<td>Acc</td>
<!-- QuestionPair -->
<td>Acc</td>
<!-- KLUE TC -->
<td>Acc</td>
<td>F1</td>
<!-- KLUE STS -->
<td>F1</td>
<td>Pearson</td>
<td>Spearman</td>
<!-- KorSTS -->
<td>F1</td>
<td>Pearson</td>
<td>Spearman</td>
<!-- HateSpeech -->
<td>Bias Acc</td>
<td>Hate Acc</td>
</tr>
<tr>
<th>Score</th>
<!-- KLUE NLI -->
<td>0.7390</td>
<!-- NSMC -->
<td>0.8877</td>
<!-- QuestionPair -->
<td>0.9208</td>
<!-- KLUE TC -->
<td>0.8667</td>
<td>0.8637</td>
<!-- KLUE STS -->
<td>0.7654</td>
<td>0.8090</td>
<td>0.8040</td>
<!-- KorSTS -->
<td>0.8067</td>
<td>0.7909</td>
<td>0.7784</td>
<!-- HateSpeech -->
<td>0.8280</td>
<td>0.5669</td>
</tr>
</table>
- The performance was measured using [the notebooks here](https://github.com/cosmoquester/transformers-bart-finetune) with colab.
## Used Datasets
### [모두의 말뭉치](https://corpus.korean.go.kr/)
- 일상 대화 말뭉치 2020
- 구어 말뭉치
- 문어 말뭉치
- 신문 말뭉치
### AIhub
- [개방데이터 전문분야말뭉치](https://aihub.or.kr/aidata/30717)
- [개방데이터 한국어대화요약](https://aihub.or.kr/aidata/30714)
- [개방데이터 감성 대화 말뭉치](https://aihub.or.kr/aidata/7978)
- [개방데이터 한국어 음성](https://aihub.or.kr/aidata/105)
- [개방데이터 한국어 SNS](https://aihub.or.kr/aidata/30718)
### [세종 말뭉치](https://ithub.korean.go.kr/)
|
gagan3012/keytotext-small | bf3e0c8f9aab3d0cdf83db549fb7003920017ee6 | 2021-03-11T23:33:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | gagan3012 | null | gagan3012/keytotext-small | 57 | null | transformers | 5,756 | # keytotext
The idea is to build a model that takes keywords as input and generates sentences as output.
### Model:
Two Models have been built:
- Using T5-base size = 850 MB can be found here: https://huggingface.co/gagan3012/keytotext
- Using T5-small size = 230 MB can be found here: https://huggingface.co/gagan3012/keytotext-small
#### Usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small")
model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small")
```
### Demo:
[](https://share.streamlit.io/gagan3012/keytotext/app.py)
https://share.streamlit.io/gagan3012/keytotext/app.py

### Example:
['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
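A rough end-to-end sketch; the separator used to join the keywords is an assumption, so adjust it to match how the model was actually fine-tuned:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small")
model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small")

keywords = ["India", "Wedding"]
input_ids = tokenizer(" | ".join(keywords), return_tensors="pt").input_ids  # separator is a guess
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```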
|
google/bert_uncased_L-8_H-128_A-2 | 015be1772f53aaa32683e3bd44ab075708f25812 | 2021-05-19T17:35:05.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-8_H-128_A-2 | 57 | null | transformers | 5,757 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
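As a quick sanity check, the sketch below loads one of the checkpoints from this page for sequence classification; the classification head is freshly initialized and must still be trained on the target task:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-8_H-128_A-2")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/bert_uncased_L-8_H-128_A-2", num_labels=2
)
inputs = tokenizer("This miniature BERT loads like any other BERT.", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```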
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
jcpwfloi/gpt2-story-generation | b2acb595321a3c3cd0f05b7eb035132d8acda70b | 2021-05-23T05:48:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | jcpwfloi | null | jcpwfloi/gpt2-story-generation | 57 | null | transformers | 5,758 | Entry not found |
monologg/koelectra-base-finetuned-sentiment | 5bb90123f35cc220ed2d2ae1c157af265748c7bb | 2020-05-14T02:30:04.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | monologg | null | monologg/koelectra-base-finetuned-sentiment | 57 | null | transformers | 5,759 | Entry not found |
msintaha/bert-base-uncased-finetuned-copa-data-new | c7c9123706f1e0f45617181ccf4bd25c227cff66 | 2022-02-15T08:41:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:super_glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | msintaha | null | msintaha/bert-base-uncased-finetuned-copa-data-new | 57 | null | transformers | 5,760 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-copa-data-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-copa-data-new
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5995
- Accuracy: 0.7000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
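As a rough illustration only (the exact training script is not included in this card), the listed values correspond to the following `TrainingArguments`; the output directory is a placeholder:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-copa-data-new",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer
)
```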
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6564 | 0.6600 |
| No log | 2.0 | 50 | 0.5995 | 0.7000 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pucpr/clinicalnerpt-finding | b61d343277019091c349926424c0996488ffb414 | 2021-10-13T09:32:39.000Z | [
"pytorch",
"bert",
"token-classification",
"pt",
"dataset:SemClinBr",
"transformers",
"autotrain_compatible"
] | token-classification | false | pucpr | null | pucpr/clinicalnerpt-finding | 57 | 4 | transformers | 5,761 | ---
language: "pt"
widget:
- text: "RECEBE ALTA EM BOM ESTADO GERAL, COM PLANO DE ACOMPANHAR NO AMBULATÓRIO."
- text: "PACIENTE APRESENTOU BOA EVOLUÇÃO CLÍNICA APÓS OTIMIZAÇÃO DO TTO DA ICC."
datasets:
- SemClinBr
thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png"
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# Portuguese Clinical NER - Finding
The Finding NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models for clinical entities (compatible with UMLS) were trained. All NER models from the "pucpr" user were trained on the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr) for 10 epochs, using the IOB2 format and starting from the BioBERTpt(all) model.
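A minimal usage sketch (not from the original authors), assuming the checkpoint loads with the standard token-classification classes; the example sentence is taken from the widget above:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("pucpr/clinicalnerpt-finding")
model = AutoModelForTokenClassification.from_pretrained("pucpr/clinicalnerpt-finding")

# group sub-word predictions into entity spans
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("PACIENTE APRESENTOU BOA EVOLUÇÃO CLÍNICA APÓS OTIMIZAÇÃO DO TTO DA ICC."))
```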
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
|
sail/poolformer_m48 | 752b4a4de30d3fb40c59db413fa6f9ce51be2cc0 | 2022-04-08T07:48:25.000Z | [
"pytorch",
"poolformer",
"image-classification",
"dataset:imagenet",
"arxiv:2111.11418",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | sail | null | sail/poolformer_m48 | 57 | null | transformers | 5,762 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# PoolFormer (M48 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator, pooling.
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_m48')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_m48')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| PoolFormer-S12 | 77.2 | 12M | https://huggingface.co/sail/poolformer_s12 |
| PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| **PoolFormer-M48** | **82.5** | **73M** | **https://huggingface.co/sail/poolformer_m48** |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
``` |
seduerr/t5-pawraphrase | afb526cf961ea8113dbb7356df14a599eb7c3876 | 2021-06-23T14:19:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/t5-pawraphrase | 57 | null | transformers | 5,763 |
# Invoking more Creativity with Pawraphrases based on T5
## This micro-service allows you to find paraphrases for a given text based on T5.

We explain how we fine-tune the T5 architecture with the PAWS dataset (both from Google) to give it the capability of creating paraphrases (or pawraphrases, since we are using the PAWS dataset :smile:). With this, we can create paraphrases for any given textual input.
Find the code for the service in this [Github Repository](https://github.com/seduerr91/pawraphrase_public). In order to create your own __'pawraphrase tool'__, follow these steps:
### Step 1: Find a Useful Architecture and Datasets
Since Google's [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) has been trained on multiple tasks (e.g., text summarization, question answering) and is based entirely on text-to-text tasks, it is well suited for extending its task base by fine-tuning it with paraphrases.
Luckily, the [PAWS](https://github.com/google-research-datasets/paws) dataset consists of approximately 50,000 labeled paraphrases that we can use to fine-tune T5.
### Step 2: Prepare the PAWS Dataset for the T5 Architecture
Once identified, it is crucial to prepare the PAWS dataset so it can be fed into the T5 architecture for fine-tuning. Since PAWS comes with both paraphrases and non-paraphrases, it first needs to be filtered for paraphrases only. After packing the result into a Pandas DataFrame, the necessary column headers have to be created. Finally, you split the resulting samples into train, test, and validation sets, as sketched below.
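A minimal pandas sketch of this filtering and splitting step — the file path, the PAWS TSV column names and the `input_text`/`target_text`/`prefix` target columns are assumptions; the exact code is in the linked notebook:
```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("paws/final/train.tsv", sep="\t")    # columns: id, sentence1, sentence2, label (assumed)
df = df[df["label"] == 1]                             # keep only true paraphrase pairs
df = df.rename(columns={"sentence1": "input_text", "sentence2": "target_text"})
df["prefix"] = "paraphrase"                           # task prefix expected by T5

train_df, rest_df = train_test_split(df, test_size=0.2, random_state=42)
val_df, test_df = train_test_split(rest_df, test_size=0.5, random_state=42)
```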

### Step 3: Fine-tune T5-base with PAWS
Next, following these [training instructions](https://towardsdatascience.com/paraphrase-any-question-with-t5-text-to-text-transfer-transformer-pretrained-model-and-cbb9e35f1555), in which the Quora dataset was used, we feed the PAWS dataset into T5 instead. Central is the following code, which ensures that T5 understands that it has to _paraphrase_. The adapted version can be found [here](https://github.com/seduerr91/pawraphrase_public/blob/master/t5_pawraphrase_training.ipynb).

Additionally, it is helpful to pin the old versions _torch==1.4.0_, _transformers==2.9.0_ and _pytorch_lightning==0.7.5_, since the newer versions break (trust me, I am speaking from experience). When doing such training, it is also a good idea to start with the smallest architecture (here, _T5-small_) and a small subset of your dataset (e.g., 100 paraphrase examples) to quickly identify where the training may fail or stop.
### Step 4: Start Inference by yourself.
Next, you can use the fine-tuned T5 architecture to create paraphrases from any input, as seen in the introductory image. The corresponding code can be found [here](https://github.com/seduerr91/pawraphrase_public/blob/master/t5_pawraphrase_inference.ipynb), and a short sketch follows below.
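For reference, a minimal inference sketch — the `paraphrase:` prefix and the sampling settings are assumptions made in the spirit of the linked notebook, not its exact contents:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("seduerr/t5-pawraphrase")
model = T5ForConditionalGeneration.from_pretrained("seduerr/t5-pawraphrase")

sentence = "This service creates paraphrases for any given text."
inputs = tokenizer("paraphrase: " + sentence, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,
    top_k=120,
    top_p=0.98,
    num_return_sequences=3,
)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```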
### Step 5: Using the fine-tuned model through a GUI
Finally, to make the service useful we can provide it as an API as done with the infilling model [here](https://seduerr91.github.io/blog/ilm-fastapi) or with this [frontend](https://github.com/renatoviolin/T5-paraphrase-generation) which was prepared by Renato. Kudos!
Thank you for reading this article. I'd be curious about your opinion.
#### Who am I?
I am Sebastian, an NLP Deep Learning Research Scientist (M.Sc. in IT and Business). In my former life, I was a manager at Austria's biggest bank. In the future, I want to work remotely and flexibly in the field of NLP.
Drop me a message on [LinkedIn](https://www.linkedin.com/in/sebastianduerr/) if you want to get in touch! |
textattack/xlnet-large-cased-SST-2 | d2e7fe2c3bc11df01de21b76788559740b688921 | 2020-06-09T16:59:05.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/xlnet-large-cased-SST-2 | 57 | 0 | transformers | 5,764 | Entry not found |
tiedeman/opus-mt-en-he | 4fb656f23568471c76209d2582cd27a7645f8c15 | 2021-03-04T17:50:20.000Z | [
"pytorch",
"rust",
"marian",
"text2text-generation",
"en",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | tiedeman | null | tiedeman/opus-mt-en-he | 57 | null | transformers | 5,765 | ---
language:
- en
- he
tags:
- translation
license: apache-2.0
---
### en-he
* source group: English
* target group: Hebrew
* OPUS readme: [eng-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md)
* model: transformer
* source language(s): eng
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.heb | 37.9 | 0.602 |
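A minimal usage sketch (not part of the original release notes), assuming the checkpoint loads with the standard Marian classes, as the tags suggest:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "tiedeman/opus-mt-en-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```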
### System Info:
- hf_name: en-he
- source_languages: eng
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'he']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus-2020-10-04.test.txt
- src_alpha3: eng
- tgt_alpha3: heb
- chrF2_score: 0.602
- bleu: 37.9
- brevity_penalty: 1.0
- ref_len: 60359.0
- src_name: English
- tgt_name: Hebrew
- train_date: 2020-10-04 00:00:00
- src_alpha2: en
- tgt_alpha2: he
- prefer_old: False
- short_pair: en-he
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: d99ed7ad618037ae878f0758157ed0764bd7f935
- port_machine: LM0-400-22516.local
- port_time: 2020-10-15-16:31 |
microsoft/tapex-base-finetuned-tabfact | 64c585e9e0784c175d8eddb0ee17974c260f1766 | 2022-07-14T10:09:42.000Z | [
"pytorch",
"bart",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2107.07653",
"transformers",
"tapex",
"license:mit"
] | text-classification | false | microsoft | null | microsoft/tapex-base-finetuned-tabfact | 57 | null | transformers | 5,766 | ---
language: en
tags:
- tapex
datasets:
- tab_fact
license: mit
---
# TAPEX (base-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-base` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset.
## Intended Uses
You can use the model for table fact verification.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-base-finetuned-tabfact")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25 | 37d6511313dffd6e304cab976cb111b879f29857 | 2022-03-01T13:24:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25 | 57 | null | transformers | 5,767 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2698
- Precision: 0.3321
- Recall: 0.5265
- F1: 0.4073
- Accuracy: 0.8942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
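The evaluation code is not included in this card; as an illustration only, token-level precision, recall, F1 and accuracy of this kind are commonly computed with `seqeval`, roughly as sketched below (the label names are placeholders, not the actual label set):
```python
import numpy as np
from datasets import load_metric

metric = load_metric("seqeval")
label_list = ["O", "B-ENT", "I-ENT"]  # placeholder label names

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=2)
    # drop special tokens (labelled -100) before scoring
    true_predictions = [
        [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```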
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3314 | 0.1627 | 0.3746 | 0.2269 | 0.8419 |
| No log | 2.0 | 60 | 0.2957 | 0.2887 | 0.4841 | 0.3617 | 0.8592 |
| No log | 3.0 | 90 | 0.2905 | 0.2429 | 0.5141 | 0.3299 | 0.8651 |
| No log | 4.0 | 120 | 0.2759 | 0.3137 | 0.5565 | 0.4013 | 0.8787 |
| No log | 5.0 | 150 | 0.2977 | 0.3116 | 0.5565 | 0.3995 | 0.8796 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
jkhan447/sentiment-model-sample-group-emotion | 50852be49c7255937fb8baada134d7a65e6c6aad | 2022-03-23T08:19:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jkhan447 | null | jkhan447/sentiment-model-sample-group-emotion | 57 | null | transformers | 5,768 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-group-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-group-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4604
- Accuracy: 0.7004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
afbudiman/indobert-classification | 9847311f7585cf858a1ba9cb535b3cd88e421a02 | 2022-05-06T12:54:14.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:indonlu",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | afbudiman | null | afbudiman/indobert-classification | 57 | null | transformers | 5,769 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
model-index:
- name: indobert-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9396825396825397
- name: F1
type: f1
value: 0.9393057427148881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-classification
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3707
- Accuracy: 0.9397
- F1: 0.9393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
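The evaluation code is not part of this card; accuracy and F1 of this kind are typically computed with a `compute_metrics` function along the lines of the sketch below (the averaging mode for F1 is an assumption):
```python
import numpy as np
from datasets import load_metric

accuracy_metric = load_metric("accuracy")
f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=predictions, references=labels, average="weighted")["f1"],
    }
```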
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2458 | 1.0 | 688 | 0.2229 | 0.9325 | 0.9323 |
| 0.1258 | 2.0 | 1376 | 0.2332 | 0.9373 | 0.9369 |
| 0.059 | 3.0 | 2064 | 0.3389 | 0.9365 | 0.9365 |
| 0.0268 | 4.0 | 2752 | 0.3412 | 0.9421 | 0.9417 |
| 0.0097 | 5.0 | 3440 | 0.3707 | 0.9397 | 0.9393 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Finnish-NLP/byt5-base-finnish | 6b5217510aa375669642c64533388b571dd87e6f | 2022-07-12T14:22:08.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:2105.13626",
"arxiv:2002.05202",
"transformers",
"finnish",
"byt5",
"t5x",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | Finnish-NLP | null | Finnish-NLP/byt5-base-finnish | 57 | null | transformers | 5,770 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- byt5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# ByT5-base for Finnish
Pretrained ByT5 model on Finnish language using a span-based masked language modeling (MLM) objective. ByT5 was introduced in
[this paper](https://arxiv.org/abs/2105.13626)
and first released at [this page](https://github.com/google-research/byt5).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
ByT5 is an encoder-decoder model and treats all NLP problems in a text-to-text format. ByT5 is a tokenizer-free extension of the T5 model. Instead of using a subword vocabulary like most other pretrained language models (BERT, XLM-R, T5, GPT-3), the ByT5 model operates directly on UTF-8 bytes, removing the need for any text preprocessing. ByT5 can outperform T5 models on tasks that involve noisy text or are sensitive to spelling and pronunciation.
Finnish ByT5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective with an average span-mask of 20 UTF-8 characters. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model uses the [google/byt5-base](https://huggingface.co/google/byt5-base) architecture, which means the encoder has 18 transformer layers and the decoder has 6 transformer layers. The [ByT5 paper](https://arxiv.org/abs/2105.13626) found that a "heavier" encoder is beneficial in vocabulary-free models such as ByT5.
In total, this model has 582 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
**Note:** ByT5 works on raw UTF-8 bytes and can be used without a tokenizer. For batched inference and training, however, it is recommended to use a tokenizer class for padding.
Here is how to use this model in PyTorch:
```python
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/byt5-base-finnish")
```
and in TensorFlow:
```python
from transformers import TFT5ForConditionalGeneration
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/byt5-base-finnish", from_pt=True)
```
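For batched inputs, a tokenizer can be used for padding as noted above. Below is a minimal sketch of a batched forward pass — the byte-level ByT5 tokenizer is shared across ByT5 checkpoints, so loading it from `google/byt5-base` is an assumption made here for convenience, and the Finnish sentences are arbitrary examples:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")  # assumption: shared ByT5 byte tokenizer
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/byt5-base-finnish")

inputs = tokenizer(
    ["Terveisiä Helsingistä!", "Hyvää huomenta."],
    padding="longest",
    return_tensors="pt",
)
labels = tokenizer(
    ["Terveisiä Helsingistä!", "Hyvää huomenta."],
    padding="longest",
    return_tensors="pt",
).input_ids

# Forward pass only; the model still needs task-specific fine-tuning to be useful.
loss = model(**inputs, labels=labels).loss
print(loss)
```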
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The inputs and the outputs are sequences of 512 consecutive bytes. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 850K steps with a batch size of 256 (in total 111B bytes). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay (exponential decay) of the learning rate.
Training code was from Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x), and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 bytes.
When fine-tuned on those datasets, this model (the fourth row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |TBA |TBA |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
IsaacRodgz/Tamil-Hate-Speech | 7a5db650dc1a62398891d5bf50a10451ca2cb8ec | 2022-06-02T22:05:58.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | IsaacRodgz | null | IsaacRodgz/Tamil-Hate-Speech | 57 | null | transformers | 5,771 | Entry not found |
ccdv/lsg-bart-base-16384-mediasum | f7c6158dd1c5402e5afecc3c70d0958fc9fac2f7 | 2022-07-25T05:30:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:ccdv/mediasum",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-16384-mediasum | 57 | null | transformers | 5,772 | ---
language:
- en
tags:
- summarization
datasets:
- ccdv/mediasum
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-16384-mediasum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file; you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384-mediasum", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384-mediasum", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-16384-mediasum
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096-mediasum](https://huggingface.co/ccdv/lsg-bart-base-4096-mediasum) on the [ccdv/mediasum roberta_prepended mediasum](https://huggingface.co/datasets/ccdv/mediasum) dataset. \
The model is converted to handle sequences up to 16384 tokens long and fine-tuned accordingly for 1 epoch. \
It achieves the following results on the test set:
| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 16384 | 64 | Full | 256 | 0 | 768 | 35.31 | 18.35 | 31.81 | 32.47 |
| 16384 | 1 | Full | 256 | 0 | 768 | 35.21 | 18.20 | 31.73 | 32.37 |
| 16384 | 64 | Global only | 256 | 0 | 768 | 35.22 | 18.08 | 31.54 | 32.21 |
| 16384 | 1 | None | 256 | 0 | 768 | 35.17 | 18.13 | 31.54 | 32.20 |
Reference model:
| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | 1 | - | 256 | 0 | 768 | 35.16 | 18.13 | 31.54 | 32.20 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers - 6 decoder layers). \
The model is warm started from [ccdv/lsg-bart-base-4096-mediasum](https://huggingface.co/ccdv/lsg-bart-base-4096-mediasum), converted to handle long sequences (encoder only) and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: ccdv/mediasum
- dataset_config_name: roberta_prepended
- eval_batch_size: 8
- eval_samples: 10000
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 128
- min_length: 3
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
ClassCat/roberta-base-spanish | 99bf4a2b27347fc92a75504fa89ce98c711f1245 | 2022-07-14T09:38:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:wikipedia",
"dataset:cc100",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ClassCat | null | ClassCat/roberta-base-spanish | 57 | 1 | transformers | 5,773 | ---
language: es
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "Yo vivo en <mask>."
- text: "Quiero <mask> contigo ?"
- text: "Es clima es <mask>."
- text: "Me llamo <mask>."
- text: "Las negociaciones están <mask>."
---
## RoBERTa Spanish base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the RoBERTa base settings except for the vocabulary size.
### Tokenizer
Using BPE tokenizer with vocabulary size 50,000.
### Training Data
* [wiki40b/es](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bes) (Spanish Wikipedia)
* Subset of [CC-100/es](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-spanish')
unmasker("Yo soy <mask>.")
``` |
Aktsvigun/bart-base_aeslc_705525 | a34cc8dc35ea4ed1d4ccf8d8b5419a714180e1d7 | 2022-07-07T15:46:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_705525 | 57 | null | transformers | 5,774 | Entry not found |
huggingtweets/frnsw-nswrfs-nswses | e98941d314f6a2f12f178225d9c318960231e426 | 2022-07-06T14:32:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/frnsw-nswrfs-nswses | 57 | null | transformers | 5,775 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1150678663265832960/ujqrCyuu_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/895892720194957313/RVLTWlDI_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500778204294180868/3B6rKocs_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NSW RFS & NSW SES & Fire and Rescue NSW</div>
<div style="text-align: center; font-size: 14px;">@frnsw-nswrfs-nswses</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NSW RFS & NSW SES & Fire and Rescue NSW.
| Data | NSW RFS | NSW SES | Fire and Rescue NSW |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3248 | 3249 |
| Retweets | 275 | 2093 | 875 |
| Short tweets | 12 | 12 | 48 |
| Tweets kept | 2963 | 1143 | 2326 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1cxt6027/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @frnsw-nswrfs-nswses's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tjbhow2z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tjbhow2z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/frnsw-nswrfs-nswses')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
StanfordAIMI/covid-radbert | 6519862ea4beae4199cac025f38274eba429ee34 | 2022-07-19T04:21:14.000Z | [
"pytorch",
"bert",
"en",
"transformers",
"text-classification",
"uncased",
"radiology",
"biomedical",
"covid-19",
"covid19",
"license:mit"
] | text-classification | false | StanfordAIMI | null | StanfordAIMI/covid-radbert | 57 | 1 | transformers | 5,776 | ---
widget:
- text: "procedure: single ap view of the chest comparison: none findings: no surgical hardware nor tubes. lungs, pleura: low lung volumes, bilateral airspace opacities. no pneumothorax or pleural effusion. cardiovascular and mediastinum: the cardiomediastinal silhouette seems stable. impression: 1. patchy bilateral airspace opacities, stable, but concerning for multifocal pneumonia. 2. absence of other suspicions, the rest of the lungs seems fine."
- text: "procedure: single ap view of the chest comparison: none findings: No surgical hardware nor tubes. lungs, pleura: low lung volumes, bilateral airspace opacities. no pneumothorax or pleural effusion. cardiovascular and mediastinum: the cardiomediastinal silhouette seems stable. impression: 1. patchy bilateral airspace opacities, stable. 2. some areas are suggestive that pneumonia can not be excluded. 3. recommended to follow-up shortly and check if there are additional symptoms"
tags:
- text-classification
- pytorch
- transformers
- uncased
- radiology
- biomedical
- covid-19
- covid19
language:
- en
license: mit
---
COVID-RadBERT was trained to detect the presence or absence of COVID-19 in radiology reports, along with an "uncertain" label for cases in which further medical tests are required. Manuscript in-proceedings.
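A minimal usage sketch (not from the original authors); the report below is an abridged version of the widget example, and the returned label names are whatever the checkpoint's configuration defines:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="StanfordAIMI/covid-radbert")

report = (
    "procedure: single ap view of the chest findings: low lung volumes, "
    "bilateral airspace opacities. no pneumothorax or pleural effusion. "
    "impression: patchy bilateral airspace opacities, concerning for multifocal pneumonia."
)
print(classifier(report))
```
 |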
Kamrani/t5-large | aba0d1f6dc5c32b6e56a5c7c2733aa1aaf95d2f5 | 2022-07-27T13:13:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"fa",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Kamrani | null | Kamrani/t5-large | 57 | null | transformers | 5,777 | ---
language:
- en
- fa
tags:
- translation
license: apache-2.0
---
# Model Card for T5 Large

# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Large is the checkpoint with 770 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
Thereby, the following datasets were being used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
- STS-B [Ceret al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-Large, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5Model.from_pretrained("t5-large")
input_ids = tokenizer(
"Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
|
Geotrend/bert-base-15lang-cased | 787fc26696471d329e073e1a473adcf94de8c309 | 2022-06-28T08:50:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"ar",
"ru",
"vi",
"el",
"bg",
"th",
"tr",
"hi",
"ur",
"sw",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-15lang-cased | 56 | 1 | transformers | 5,778 | ---
language:
- multilingual
- en
- fr
- es
- de
- zh
- ar
- ru
- vi
- el
- bg
- th
- tr
- hi
- ur
- sw
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-15lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
The measurements below have been computed on a [Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB)](https://cloud.google.com/compute/docs/machine-types\#n1_machine_type):
| Model | Num parameters | Size | Memory | Loading time |
| ------------------------------- | -------------- | -------- | -------- | ------------ |
| bert-base-multilingual-cased | 178 million | 714 MB | 1400 MB | 4.2 sec |
| Geotrend/bert-base-15lang-cased | 141 million | 564 MB | 1098 MB | 3.1 sec |
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur and sw.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-15lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-15lang-cased")
```
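As a quick, unofficial sanity check of the claim above that the reduced model preserves mBERT's representations, the following sketch (not part of the original card; it assumes enough memory to load both checkpoints side by side) compares the hidden states of the two models on an English sentence:
```python
import torch
from transformers import AutoTokenizer, AutoModel

text = "Paris is the capital of France."

# Reduced 15-language model
small_tok = AutoTokenizer.from_pretrained("Geotrend/bert-base-15lang-cased")
small_model = AutoModel.from_pretrained("Geotrend/bert-base-15lang-cased")

# Original multilingual model
full_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
full_model = AutoModel.from_pretrained("bert-base-multilingual-cased")

with torch.no_grad():
    small_out = small_model(**small_tok(text, return_tensors="pt")).last_hidden_state
    full_out = full_model(**full_tok(text, return_tensors="pt")).last_hidden_state

# If the representations are indeed preserved, this should print True.
print(torch.allclose(small_out, full_out, atol=1e-5))
```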
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
IDEA-CCNL/Randeng-MegatronT5-770M | d9f1391038fd66413915bb65036a99f2c5d52a24 | 2022-04-12T01:50:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"zh",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | IDEA-CCNL | null | IDEA-CCNL/Randeng-MegatronT5-770M | 56 | 5 | transformers | 5,779 | ---
language:
- zh
license: apache-2.0
inference: false
---
# Randeng-MegatronT5-770M model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
Randeng-MegatronT5-770M is a 770-million-parameter model with a standard Transformer structure, trained on 280 GB of Chinese data for 14 days on 16 A100 GPUs.
## Usage
The Randeng-MegatronT5-770M architecture is not included in [Transformers](https://github.com/huggingface/transformers); run the following code to obtain it from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
## Usage
```python
from fengshen import T5ForConditionalGeneration
from fengshen import T5Config
from fengshen import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
config = T5Config.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
model = T5ForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-MegatronT5-770M')
```
## Citation
If you find this resource useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Raychanan/chinese-roberta-wwm-ext-FineTuned | 980284db7a4b313adbfaf7504689a3668d22e829 | 2021-05-18T21:57:46.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Raychanan | null | Raychanan/chinese-roberta-wwm-ext-FineTuned | 56 | null | transformers | 5,780 | Entry not found |
adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717 | 5deecb5ec054e452417edb169c441b669e069908 | 2021-10-15T19:12:03.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:adrianmoses/autonlp-data-auto-nlp-lyrics-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | adrianmoses | null | adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717 | 56 | null | transformers | 5,781 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- adrianmoses/autonlp-data-auto-nlp-lyrics-classification
co2_eq_emissions: 88.89388195672073
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 19333717
- CO2 Emissions (in grams): 88.89388195672073
## Validation Metrics
- Loss: 1.0499154329299927
- Accuracy: 0.6207088513638894
- Macro F1: 0.46250803661544765
- Micro F1: 0.6207088513638894
- Weighted F1: 0.5850362079928957
- Macro Precision: 0.6451479987704787
- Micro Precision: 0.6207088513638894
- Weighted Precision: 0.6285080101186085
- Macro Recall: 0.4405680478429344
- Micro Recall: 0.6207088513638894
- Weighted Recall: 0.6207088513638894
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
flair/frame-english | 09e0348db9809c2e50bf7135a2102817f6155fbf | 2021-03-02T22:02:55.000Z | [
"pytorch",
"en",
"dataset:ontonotes",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | flair | null | flair/frame-english | 56 | null | flair | 5,782 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "George returned to Berlin to return his hat."
---
## English Verb Disambiguation in Flair (default model)
This is the standard verb disambiguation model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **89,34** (Ontonotes) - predicts [Proposition Bank verb frames](http://verbs.colorado.edu/propbank/framesets-english-aliases/).
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/frame-english")
# make example sentence
sentence = Sentence("George returned to Berlin to return his hat.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following frame tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('frame'):
print(entity)
```
This yields the following output:
```
Span [2]: "returned" [− Labels: return.01 (0.9951)]
Span [6]: "return" [− Labels: return.02 (0.6361)]
```
So, the word "*returned*" is labeled as **return.01** (as in *go back somewhere*) while "*return*" is labeled as **return.02** (as in *give back something*) in the sentence "*George returned to Berlin to return his hat*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus = ColumnCorpus(
"resources/tasks/srl", column_format={1: "text", 11: "frame"}
)
# 2. what tag do we want to predict?
tag_type = 'frame'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
BytePairEmbeddings("en"),
FlairEmbeddings("news-forward"),
FlairEmbeddings("news-backward"),
]
# embedding stack consists of byte-pair and contextual Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/frame-english',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2019flair,
title={FLAIR: An easy-to-use framework for state-of-the-art NLP},
author={Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland},
booktitle={{NAACL} 2019, 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)},
pages={54--59},
year={2019}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
gchhablani/bert-base-cased-finetuned-stsb | 56e68a16be5d3c7647447dc0512261fbe3dd9f99 | 2021-09-20T09:09:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:glue",
"arxiv:2105.03824",
"transformers",
"generated_from_trainer",
"fnet-bert-base-comparison",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/bert-base-cased-finetuned-stsb | 56 | null | transformers | 5,783 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: bert-base-cased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8897907271421561
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-stsb
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4861
- Pearson: 0.8926
- Spearmanr: 0.8898
- Combined Score: 0.8912
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
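The card does not include an inference snippet; the sketch below is an assumption about typical usage (not from the original card) and shows how the fine-tuned regression head scores the similarity of a sentence pair:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gchhablani/bert-base-cased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# STS-B is a regression task: the single logit is a similarity score (roughly 0 to 5).
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a musical instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity score: {score:.2f}")
```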
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name stsb \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-stsb \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Combined Score | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:--------------:|:---------------:|:-------:|:---------:|
| 1.1174 | 1.0 | 360 | 0.8816 | 0.5000 | 0.8832 | 0.8800 |
| 0.3835 | 2.0 | 720 | 0.8901 | 0.4672 | 0.8915 | 0.8888 |
| 0.2388 | 3.0 | 1080 | 0.8912 | 0.4861 | 0.8926 | 0.8898 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
google/tapas-large-finetuned-wikisql-supervised | 77ca7d982ca59b32a97ee35d211db21086d41403 | 2021-11-29T13:05:23.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikisql",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1709.00103",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-large-finetuned-wikisql-supervised | 56 | 1 | transformers | 5,784 | ---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- wikisql
---
# TAPAS large model fine-tuned on WikiSQL (in a supervised fashion)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wikisql_sqa_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253) and [WikiSQL](https://github.com/salesforce/WikiSQL). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wikisql_sqa_inter_masklm_large` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and an aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA and WikiSQL.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
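As a quick illustration, here is a minimal sketch based on the generic Transformers table-question-answering pipeline (it is not taken from the TAPAS documentation, and the table and question are made up):
```python
from transformers import pipeline

# Note: depending on your transformers version, TAPAS may additionally require torch-scatter.
tqa = pipeline(
    "table-question-answering",
    model="google/tapas-large-finetuned-wikisql-supervised",
)

# Hypothetical table; cell values must be strings.
table = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}
print(tqa(table=table, query="How many movies does Leonardo Di Caprio have?")["answer"])
```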
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WikiSQL dataset into the SQA format using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 6.17164e-5, and a warmup
ratio of 0.1424. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1709-00103,
author = {Victor Zhong and
Caiming Xiong and
Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017},
url = {http://arxiv.org/abs/1709.00103},
archivePrefix = {arXiv},
eprint = {1709.00103},
timestamp = {Mon, 13 Aug 2018 16:48:41 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1709-00103.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggingtweets/abdi_smokes | 053c1e273c5ac057b02362253c37bccaaae753e4 | 2021-05-21T17:24:13.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/abdi_smokes | 56 | 1 | transformers | 5,785 | ---
language: en
thumbnail: https://www.huggingtweets.com/abdi_smokes/1618179038747/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1369684058037493763/sSdQ_pIn_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Mr. Dj💿 🤖 AI Bot </div>
<div style="font-size: 15px">@abdi_smokes bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@abdi_smokes's tweets](https://twitter.com/abdi_smokes).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3185 |
| Retweets | 172 |
| Short tweets | 790 |
| Tweets kept | 2223 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33jkx1b1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abdi_smokes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33g19os5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33g19os5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/abdi_smokes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/furrymicky | e5ddb65505694397a4c073b1f08fa9c0eeb19c6a | 2021-05-22T04:53:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/furrymicky | 56 | null | transformers | 5,786 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1339300525007835137/YpAMPovA_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Micky The Weirdo from Taranto 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@furrymicky bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@furrymicky's tweets](https://twitter.com/furrymicky).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>459</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>14</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>91</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>354</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/109l35nl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @furrymicky's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/q3uw2fui) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/q3uw2fui/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/furrymicky'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
hugorosen/flaubert_base_uncased-xnli-sts | 20c8c6c4bf023e1882ed8e1fb94b7a0a061db375 | 2021-11-30T15:36:33.000Z | [
"pytorch",
"flaubert",
"feature-extraction",
"fr",
"dataset:xnli",
"dataset:stsb_multi_mt",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | hugorosen | null | hugorosen/flaubert_base_uncased-xnli-sts | 56 | null | sentence-transformers | 5,787 | ---
pipeline_tag: sentence-similarity
language: fr
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- fr
datasets:
- xnli
- stsb_multi_mt
---
# hugorosen/flaubert_base_uncased-xnli-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Ceci est une phrase d'exemple", "Chaque phrase est convertie"]
model = SentenceTransformer('hugorosen/flaubert_base_uncased-xnli-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Ceci est une phrase d'exemple", "Chaque phrase est convertie"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hugorosen/flaubert_base_uncased-xnli-sts')
model = AutoModel.from_pretrained('hugorosen/flaubert_base_uncased-xnli-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
This model scores 76.9% on the STS test set (French).
## Training
### Pre-training
We use the pre-trained [flaubert/flaubert_base_uncased](https://huggingface.co/flaubert/flaubert_base_cased). Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a `CosineSimilarityLoss` on the XNLI and STS datasets (French).
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: FlaubertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Fine-tuned for semantic similarity by Hugo Rosenkranz-costa.
Based on FlauBERT:
```
@InProceedings{le2020flaubert,
author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier},
title = {FlauBERT: Unsupervised Language Model Pre-training for French},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {2479--2490},
url = {https://www.aclweb.org/anthology/2020.lrec-1.302}
}
``` |
jegormeister/bert-base-dutch-cased-snli | d507d8dc67f5c40103aeebfd8552595ad806484d | 2021-08-16T09:10:25.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | jegormeister | null | jegormeister/bert-base-dutch-cased-snli | 56 | 1 | sentence-transformers | 5,788 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# bert-base-dutch-cased-snli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bert-base-dutch-cased-snli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bert-base-dutch-cased-snli')
model = AutoModel.from_pretrained('bert-base-dutch-cased-snli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bert-base-dutch-cased-snli)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "utils.CombEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 722,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier | 7b31114c03220acb89e26d7a0d20b051404cc713 | 2021-04-29T18:01:33.000Z | [
"pytorch",
"distilbert",
"text-classification",
"vi",
"transformers",
"vietnamese",
"topicifier",
"multilingual",
"tiny",
"license:mit"
] | text-classification | false | lamhieu | null | lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier | 56 | null | transformers | 5,789 | ---
language:
- vi
tags:
- vietnamese
- topicifier
- multilingual
- tiny
license:
- mit
pipeline_tag: text-classification
widget:
- text: "Đam mê của tôi là nhiếp ảnh"
---
# distilbert-base-multilingual-cased-vietnamese-topicifier
## About
Fine-tuned from `distilbert-base-multilingual-cased` on a tiny dataset of Vietnamese topics.
## Usage
Try entering a message to predict what topic is being discussed. For example:
```
# Photography
Đam mê của tôi là nhiếp ảnh
# World War I
Bạn đã từng nghe về cuộc đại thế chiến ?
```
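A minimal way to query the model programmatically (a sketch, not part of the original card) is through the text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier",
)

# "My passion is photography" (Vietnamese), as in the example above.
print(classifier("Đam mê của tôi là nhiếp ảnh"))
```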
## Other
The model was fine-tuned on a tiny dataset; don't use it in production. |
mbien/wav2vec2-large-xlsr-polish | 5802268ce272423e6772368bd971c5a5104f3c6f | 2021-07-06T12:38:34.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pl",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mbien | null | mbien/wav2vec2-large-xlsr-polish | 56 | null | transformers | 5,790 | ---
language: pl
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: mbien/wav2vec2-large-xlsr-polish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pl
type: common_voice
args: pl
metrics:
- name: Test WER
type: wer
value: 23.01
---
# Wav2Vec2-Large-XLSR-53-Polish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mbien/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("mbien/wav2vec2-large-xlsr-polish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Polish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mbien/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("mbien/wav2vec2-large-xlsr-polish")
model.to("cuda")
chars_to_ignore_regex = '[\—\…\,\?\.\!\-\;\:\"\“\„\%\‘\”\�\«\»\'\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 23.01 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1DvrFMoKp9h3zk_eXrJF2s4_TGDHh0tMc?usp=sharing) |
verloop/Hinglish-Bert | 12bc17d68b1a41fe2c7a0897c25573cdd77c69e8 | 2021-05-20T08:58:33.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | verloop | null | verloop/Hinglish-Bert | 56 | null | transformers | 5,791 | Entry not found |
yazdipour/text-to-sparql-t5-base | 7d5a294a801b6edfcadbf5118e7629ef9e5ad1ed | 2021-10-19T18:16:39.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-base | 56 | null | transformers | 5,792 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-base-2021-10-19_15-35_lastDS
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.3275993764400482
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-19_15-35_lastDS
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1310
- Gen Len: 19.0
- P: 0.5807
- R: 0.0962
- F1: 0.3276
- Score: 6.4533
- Bleu-precisions: [92.48113990507008, 85.38781447185119, 80.57856404313097, 77.37314727416516]
- Bleu-bp: 0.0770
## Model description
More information needed
## Intended uses & limitations
More information needed
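The card gives no usage snippet; the following is a hedged sketch of how a T5-based text-to-SPARQL model is typically queried. The plain-question prompt format is an assumption and may differ from the exact format used during fine-tuning.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yazdipour/text-to-sparql-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "Who is the author of Le Petit Prince?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```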
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| nan | 1.0 | 4807 | 0.1310 | 19.0 | 0.5807 | 0.0962 | 0.3276 | 6.4533 | [92.48113990507008, 85.38781447185119, 80.57856404313097, 77.37314727416516] | 0.0770 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
malmarjeh/transformer | bfc8db3880ac0abc7d947adba8177a7a410270d0 | 2022-06-29T13:49:39.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"ar",
"transformers",
"Transformer",
"MSA",
"Arabic Text Summarization",
"Arabic News Title Generation",
"Arabic Paraphrasing",
"autotrain_compatible"
] | text2text-generation | false | malmarjeh | null | malmarjeh/transformer | 56 | null | transformers | 5,793 | ---
language:
- ar
tags:
- Transformer
- MSA
- Arabic Text Summarization
- Arabic News Title Generation
- Arabic Paraphrasing
widget:
- text: "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
---
# An Arabic abstractive text summarization model
A Transformer-based encoder-decoder model which has been trained on a dataset of 384,764 paragraph-summary pairs.
More details on the training of this model will be released later.
The model can be used as follows:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from arabert.preprocess import ArabertPreprocessor
model_name="malmarjeh/transformer"
preprocessor = ArabertPreprocessor(model_name="")
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
pipeline = pipeline("text2text-generation",model=model,tokenizer=tokenizer)
text = "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
text = preprocessor.preprocess(text)
result = pipeline(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=3,
repetition_penalty=3.0,
max_length=200,
length_penalty=1.0,
no_repeat_ngram_size = 3)[0]['generated_text']
result
>>> 'احتجاجات شعبية في طرابلس لليوم الثالث على التوالي'
```
## Contact:
**Mohammad Bani Almarjeh**: [Linkedin](https://www.linkedin.com/in/mohammad-bani-almarjeh/) | <[email protected]>
|
Marxav/frpron | 417c1c39b1b9ef2a8dd4cd44cd6b64a2cd5a3d67 | 2022-07-26T11:57:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"fr",
"dataset:Marxav/frpron",
"transformers"
] | text-generation | false | Marxav | null | Marxav/frpron | 56 | 1 | transformers | 5,794 | ---
language:
- fr
thumbnail: "url to a thumbnail used in social sharing"
tags:
- text-generation
datasets:
- Marxav/frpron
metrics:
- loss/eval
- perplexity
widget:
- text: "bonjour:"
- text: "salut, comment ça va:"
- text: "Louis XIII:"
- text: "anticonstitutionnellement:"
- text: "les animaux:"
inference:
parameters:
temperature: 0.01
return_full_text: True
---
# Fr-word to phonemic pronunciation
This model aims at predicting the syllabized phonemic pronunciation of French words.
The generated pronunciation is:
* A text string made of International Phonetic Alphabet (IPA) characters;
* Phonemic (i.e. remains at the phoneme-level, not deeper);
* Syllabized (i.e. characters '.' and '‿' are used to identify syllables).
Such pronunciation is used in the [French Wiktionary](https://fr.wiktionary.org/) in the {{pron|...|fr}} tag.
To use this model, simply give an input containing the word that you want to translate followed by ":", for example: "bonjour:". It will generate its predicted pronunciation, for example "bɔ̃.ʒuʁ".
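For example, with the text-generation pipeline (a sketch consistent with the widget settings above, using greedy decoding in place of the near-zero temperature; not an official snippet):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Marxav/frpron")
out = generator("bonjour:", max_new_tokens=20, do_sample=False, return_full_text=True)
# The generated text is expected to contain something like "bonjour:bɔ̃.ʒuʁ".
print(out[0]["generated_text"])
```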
This model remains experimental. Additional finetuning is needed for:
* [Homographs with different pronunciations](https://fr.wiktionary.org/wiki/Catégorie:Homographes_non_homophones_en_français),
* [French liaisons](https://en.wikipedia.org/wiki/Liaison_(French)),
* [Roman numerals](https://en.wikipedia.org/wiki/Roman_numerals).
The input length is currently limited to a maximum of 60 letters.
This work is derived from the [OTEANN paper](https://aclanthology.org/2021.sigtyp-1.1/) and [code](https://github.com/marxav/oteann3), which used [minGPT](https://github.com/karpathy/minGPT).
## More information on the model, dataset, hardware, and environmental considerations
### **The training data**
The dataset used for training this model comes from the [French Wiktionary](https://fr.wiktionary.org/).
### **The model**
The model is built on [gpt2](https://huggingface.co/gpt2) |
Helsinki-NLP/opus-mt-tc-big-gmq-en | eaa0e23392aded6a8a207d4522224a95532db343 | 2022-06-01T12:59:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tc",
"big",
"gmq",
"en",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-gmq-en | 56 | null | transformers | 5,795 | ---
language:
- da
- en
- fo
- gmq
- is
- nb
- nn
- "no"
- sv
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-gmq-en
results:
- task:
name: Translation dan-eng
type: translation
args: dan-eng
dataset:
name: flores101-devtest
type: flores_101
args: dan eng devtest
metrics:
- name: BLEU
type: bleu
value: 49.3
- task:
name: Translation isl-eng
type: translation
args: isl-eng
dataset:
name: flores101-devtest
type: flores_101
args: isl eng devtest
metrics:
- name: BLEU
type: bleu
value: 34.2
- task:
name: Translation nob-eng
type: translation
args: nob-eng
dataset:
name: flores101-devtest
type: flores_101
args: nob eng devtest
metrics:
- name: BLEU
type: bleu
value: 44.2
- task:
name: Translation swe-eng
type: translation
args: swe-eng
dataset:
name: flores101-devtest
type: flores_101
args: swe eng devtest
metrics:
- name: BLEU
type: bleu
value: 49.8
- task:
name: Translation isl-eng
type: translation
args: isl-eng
dataset:
name: newsdev2021.is-en
type: newsdev2021.is-en
args: isl-eng
metrics:
- name: BLEU
type: bleu
value: 30.4
- task:
name: Translation dan-eng
type: translation
args: dan-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: dan-eng
metrics:
- name: BLEU
type: bleu
value: 65.9
- task:
name: Translation fao-eng
type: translation
args: fao-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fao-eng
metrics:
- name: BLEU
type: bleu
value: 30.1
- task:
name: Translation isl-eng
type: translation
args: isl-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: isl-eng
metrics:
- name: BLEU
type: bleu
value: 53.3
- task:
name: Translation nno-eng
type: translation
args: nno-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nno-eng
metrics:
- name: BLEU
type: bleu
value: 56.1
- task:
name: Translation nob-eng
type: translation
args: nob-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nob-eng
metrics:
- name: BLEU
type: bleu
value: 60.2
- task:
name: Translation swe-eng
type: translation
args: swe-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: swe-eng
metrics:
- name: BLEU
type: bleu
value: 66.4
- task:
name: Translation isl-eng
type: translation
args: isl-eng
dataset:
name: newstest2021.is-en
type: wmt-2021-news
args: isl-eng
metrics:
- name: BLEU
type: bleu
value: 34.4
---
# opus-mt-tc-big-gmq-en
Neural machine translation model for translating from North Germanic languages (gmq) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): dan fao isl nno nob nor swe
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT gmq-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Han var synligt nervøs.",
"Inte ens Tom själv var övertygad."
]
model_name = "pytorch-models/opus-mt-tc-big-gmq-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# He was visibly nervous.
# Even Tom was not convinced.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-gmq-en")
print(pipe("Han var synligt nervøs."))
# expected output: He was visibly nervous.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| dan-eng | tatoeba-test-v2021-08-07 | 0.78292 | 65.9 | 10795 | 79684 |
| fao-eng | tatoeba-test-v2021-08-07 | 0.47467 | 30.1 | 294 | 1984 |
| isl-eng | tatoeba-test-v2021-08-07 | 0.68346 | 53.3 | 2503 | 19788 |
| nno-eng | tatoeba-test-v2021-08-07 | 0.69788 | 56.1 | 460 | 3524 |
| nob-eng | tatoeba-test-v2021-08-07 | 0.73524 | 60.2 | 4539 | 36823 |
| swe-eng | tatoeba-test-v2021-08-07 | 0.77665 | 66.4 | 10362 | 68513 |
| dan-eng | flores101-devtest | 0.72322 | 49.3 | 1012 | 24721 |
| isl-eng | flores101-devtest | 0.59616 | 34.2 | 1012 | 24721 |
| nob-eng | flores101-devtest | 0.68224 | 44.2 | 1012 | 24721 |
| swe-eng | flores101-devtest | 0.72042 | 49.8 | 1012 | 24721 |
| isl-eng | newsdev2021.is-en | 0.56709 | 30.4 | 2004 | 46383 |
| isl-eng | newstest2021.is-en | 0.57756 | 34.4 | 1000 | 22529 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:13:11 EEST 2022
* port machine: LM0-400-22516.local
|
facebook/wav2vec2-conformer-rope-large-100h-ft | 6750d7e0a42c140730f9a5ad15eb1fd865fb51a6 | 2022-06-15T08:16:47.000Z | [
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-conformer-rope-large-100h-ft | 56 | null | transformers | 5,796 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
---
# Wav2Vec2-Conformer-Large-100h with Rotary Position Embeddings
Wav2Vec2 Conformer with rotary position embeddings, pretrained on 960 hours of Librispeech and fine-tuned on **100 hours of Librispeech** of 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-100h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-100h-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
``` |
Evelyn18/distilbert-base-uncased-finetuned-squad | 229e1dfe28d7fb0d369d100a2101f1204c27f34c | 2022-06-22T03:50:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-finetuned-squad | 56 | null | transformers | 5,797 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0087
## Model description
More information needed
## Intended uses & limitations
More information needed
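No usage example is provided in the card; a minimal extractive question-answering sketch (the Spanish example below is hypothetical, chosen only because the becasv2 dataset name suggests Spanish scholarship data) would be:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="¿Qué ofrece el programa?",
    context="El programa de becas ofrece apoyo económico a estudiantes universitarios.",
)
print(result["answer"], result["score"])
```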
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.5219 |
| No log | 2.0 | 10 | 4.9747 |
| No log | 3.0 | 15 | 4.5448 |
| No log | 4.0 | 20 | 4.1843 |
| No log | 5.0 | 25 | 3.8491 |
| No log | 6.0 | 30 | 3.6789 |
| No log | 7.0 | 35 | 3.5018 |
| No log | 8.0 | 40 | 3.4254 |
| No log | 9.0 | 45 | 3.4566 |
| No log | 10.0 | 50 | 3.4326 |
| No log | 11.0 | 55 | 3.5741 |
| No log | 12.0 | 60 | 3.5260 |
| No log | 13.0 | 65 | 3.7003 |
| No log | 14.0 | 70 | 3.7499 |
| No log | 15.0 | 75 | 3.7961 |
| No log | 16.0 | 80 | 3.8578 |
| No log | 17.0 | 85 | 3.9928 |
| No log | 18.0 | 90 | 4.0305 |
| No log | 19.0 | 95 | 4.0024 |
| No log | 20.0 | 100 | 4.0087 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nielsr/layoutlmv3-funsd-v2 | 301f7a9dc1416e286cb7219aff24630b06d0c09a | 2022-05-11T16:06:54.000Z | [
"pytorch",
"layoutlmv3",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | nielsr | null | nielsr/layoutlmv3-funsd-v2 | 56 | 1 | transformers | 5,798 | Entry not found |
emilylearning/cond_ft_birth_date_on_wiki_bio__prcnt_100__test_run_False | 70534a6aed98ce4575cf9f79fd2af024b3e24c00 | 2022-05-12T23:38:56.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_birth_date_on_wiki_bio__prcnt_100__test_run_False | 56 | null | transformers | 5,799 | Entry not found |