modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-05 12:28:32) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (468 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-05 12:27:45) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog | Jeska | 2021-12-07T14:52:51Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog
This model is a fine-tuned version of [outputDA/checkpoint-7710](https://huggingface.co/outputDA/checkpoint-7710) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5025
- Accuracy: 0.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 15.0
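With a `linear` scheduler and no warmup configured, the learning rate decays from 1e-05 to 0 over the 19,800 training steps (15 epochs of 1,320 steps each, per the results table). A minimal sketch of such a schedule, as an illustration rather than the exact Trainer implementation:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 1e-05) -> float:
    """Linear decay from base_lr down to 0 over training (no warmup)."""
    remaining = max(0, total_steps - step)
    return base_lr * (remaining / total_steps)

# 15 epochs x 1320 steps/epoch = 19800 total optimizer steps.
total = 19800
print(linear_lr(0, total))      # base learning rate at the start
print(linear_lr(total, total))  # decayed to 0 at the final step
```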
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.9925 | 1.0 | 1320 | 3.0954 | 0.4223 |
| 2.5041 | 2.0 | 2640 | 1.9762 | 0.6563 |
| 1.8061 | 3.0 | 3960 | 1.3196 | 0.7952 |
| 1.0694 | 4.0 | 5280 | 0.9304 | 0.8510 |
| 0.6479 | 5.0 | 6600 | 0.6875 | 0.8821 |
| 0.4408 | 6.0 | 7920 | 0.5692 | 0.8976 |
| 0.2542 | 7.0 | 9240 | 0.5291 | 0.8949 |
| 0.1709 | 8.0 | 10560 | 0.5038 | 0.9059 |
| 0.1181 | 9.0 | 11880 | 0.4885 | 0.9049 |
| 0.0878 | 10.0 | 13200 | 0.4900 | 0.9049 |
| 0.0702 | 11.0 | 14520 | 0.4930 | 0.9086 |
| 0.0528 | 12.0 | 15840 | 0.4987 | 0.9113 |
| 0.0406 | 13.0 | 17160 | 0.5009 | 0.9113 |
| 0.0321 | 14.0 | 18480 | 0.5017 | 0.9104 |
| 0.0308 | 15.0 | 19800 | 0.5025 | 0.9077 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Seongkyu/bert-base-cased-finetuned-squad | Seongkyu | 2021-12-07T09:52:54Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0179 | 1.0 | 6194 | 0.9548 |
| 0.7277 | 2.0 | 12388 | 0.9717 |
| 0.507 | 3.0 | 18582 | 1.0458 |
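The validation loss bottoms out after the first epoch and rises thereafter while the training loss keeps falling, a typical sign of overfitting; selecting the checkpoint with the lowest validation loss (as, for example, the Trainer's `load_best_model_at_end` option does) would favor epoch 1. A tiny sketch of that selection:

```python
# Validation losses per epoch, taken from the table above.
val_losses = {1: 0.9548, 2: 0.9717, 3: 1.0458}

# Best-checkpoint selection: keep the epoch with the lowest validation loss.
best_epoch = min(val_losses, key=val_losses.get)
print(best_epoch)
```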
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
akahana/indonesia-sentiment-roberta | akahana | 2021-12-07T04:26:11Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: "id"
widget:
- text: "dia orang yang baik ya bunds."
---
## How to use
```python
from transformers import pipeline, set_seed

path = "akahana/indonesia-sentiment-roberta"
# device=0 runs the pipeline on the first GPU; omit it to run on CPU.
emotion = pipeline("text-classification", model=path, device=0)

set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
preds
``` |
NbAiLabArchive/test_NCC_OSCAR_style_98w | NbAiLabArchive | 2021-12-07T01:53:59Z | 4 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | Just for performing some experiments. Do not use. |
huggingtweets/eddiefisher24 | huggingtweets | 2021-12-06T23:41:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/eddiefisher24/1638834103068/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/913915780819013633/aE1adt7G_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Edward Fisher JR</div>
<div style="text-align: center; font-size: 14px;">@eddiefisher24</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Edward Fisher JR.
| Data | Edward Fisher JR |
| --- | --- |
| Tweets downloaded | 1339 |
| Retweets | 212 |
| Short tweets | 125 |
| Tweets kept | 1002 |
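The "Tweets kept" figure follows from the rows above: retweets and short tweets are filtered out before fine-tuning. As a quick check:

```python
downloaded, retweets, short = 1339, 212, 125

# Tweets kept = downloaded - retweets - short tweets.
kept = downloaded - retweets - short
print(kept)
```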
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26fekxoi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eddiefisher24's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/264vgsyc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/264vgsyc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eddiefisher24')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
philschmid/MiniLMv2-L6-H384-emotion | philschmid | 2021-12-06T19:59:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: MiniLMv2-L6-H384-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-emotion
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Accuracy: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
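With `lr_scheduler_warmup_steps: 500` and 8 epochs of 500 steps each (4,000 steps total, per the results table), the `linear` scheduler ramps the learning rate up over the first 500 steps and then decays it linearly to 0. A sketch of the shape, as an illustration rather than the exact Trainer code:

```python
def warmup_linear_lr(step: int, warmup_steps: int, total_steps: int,
                     base_lr: float = 3e-05) -> float:
    """Linear warmup to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * (step / warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * (remaining / (total_steps - warmup_steps))

# warmup_steps=500 as configured above; 8 epochs x 500 steps/epoch = 4000 total.
print(warmup_linear_lr(500, 500, 4000))  # peak learning rate at end of warmup
```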
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.432 | 1.0 | 500 | 0.9992 | 0.6805 |
| 0.8073 | 2.0 | 1000 | 0.5437 | 0.846 |
| 0.4483 | 3.0 | 1500 | 0.3018 | 0.909 |
| 0.2833 | 4.0 | 2000 | 0.2412 | 0.915 |
| 0.2169 | 5.0 | 2500 | 0.2140 | 0.9215 |
| 0.1821 | 6.0 | 3000 | 0.2159 | 0.917 |
| 0.154 | 7.0 | 3500 | 0.2084 | 0.919 |
| 0.1461 | 8.0 | 4000 | 0.2047 | 0.92 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
philschmid/MiniLMv2-L12-H384-emotion | philschmid | 2021-12-06T18:00:12Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-emotion
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2069
- Accuracy: 0.925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8745 | 1.0 | 1000 | 0.6673 | 0.81 |
| 0.3466 | 2.0 | 2000 | 0.2816 | 0.918 |
| 0.2201 | 3.0 | 3000 | 0.2367 | 0.9215 |
| 0.1761 | 4.0 | 4000 | 0.2069 | 0.925 |
| 0.1435 | 5.0 | 5000 | 0.2089 | 0.922 |
| 0.1454 | 6.0 | 6000 | 0.2168 | 0.923 |
| 0.1041 | 7.0 | 7000 | 0.2081 | 0.924 |
| 0.0953 | 8.0 | 8000 | 0.2133 | 0.9245 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
gokulkarthik/xlm-roberta-qa-chaii | gokulkarthik | 2021-12-06T15:50:08Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"en",
"ta",
"hi",
"dataset:squad",
"dataset:chaii",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language:
- en
- ta
- hi
datasets:
- squad
- chaii
widget:
- text: "அலுமினியத்தின் அணு எண் என்ன?"
context: "அலுமினியம் (ஆங்கிலம்: அலுமினியம்; வட அமெரிக்க ஆங்கிலம்: Aluminum) ஒரு வேதியியல் தனிமம் ஆகும். இதனுடைய அணு எண் 13 ஆகும். இது பூமியில் அதிகம் கிடைக்கும் உலோகங்களுள் ஒன்று. இது மின்சாரத்தையும் வெப்பத்தையும் கடத்த வல்லது. பாக்ஸைட் என்ற தாதுவில் இருந்து அலுமினியம் தயாரிக்கப்படுகிறது. இதன் வேதிக்குறியீடு Al ஆகும்."
- text: "ज्वाला गुट्टा की माँ का नाम क्या है?"
context: "ज्वाला गुट्टा (जन्म: 7 सितंबर 1983; वर्धा, महाराष्ट्र) एक भारतीय बैडमिंटन खिलाडी हैं। प्रारंभिक जीवन ज्वाला गुट्टा का जन्म 7 सितंबर 1983 को वर्धा, महाराष्ट्र में हुआ था। उनके पिता एम. क्रांति तेलुगु और मां येलन चीन से हैं। उनकी मां येलन गुट्टा पहली बार 1977 में अपने दादा जी के साथ भारत आई थीं। ज्वाला गुट्टा की प्रारंभिक पढ़ाई हैदराबाद से हुई और यहीं से उन्होंने बैडमिंटन खेलना भी शुरू किया। कॅरियर 10 साल की उम्र से ही ज्वाला गुट्टा ने एस.एम. आरिफ से ट्रेनिंग लेना शुरू कर दिया था। एस.एम. आरिफ भारत के जाने माने खेल प्रशिक्षक हैं जिन्हें द्रोणाचार्य अवार्ड से सम्मानित किया गया है। पहली बार 13 साल की उम्र में उन्होंने मिनी नेशनल बैडमिंटन चैंपियनशिप जीती थी। साल 2000 में ज्वाला गुट्टा ने 17 साल की उम्र में जूनियर नेशनल बैडमिंटन चैंपियनशिप जीती। इसी साल उन्होंने श्रुति कुरियन के साथ डबल्स में जोड़ी बनाते हुए महिलाओं के डबल्स जूनियर नेशनल बैडमिंटन चैंपियनशिप और सीनियर नेशनल बैडमिंटन चैंपियनशिप में जीत हासिल की। श्रुति कुरियन के साथ उनकी जोड़ी काफी लंबे समय तक चली। 2002 से 2008 तक लगातार सात बार ज्वाला गुट्टा ने महिलाओं के नेशनल युगल प्रतियोगिता में जीत हासिल की।"
- text: "How many bones do you have in your body?"
context: "A normal adult human skeleton consists of the following 206 (208 if the breast is thought to be three parts). This number can vary depending on the physiological differences. For example, in a very small number of humans, an extra rib (neck) or an extra lower spinal cord is found. There are 22 bones in the human skull (excluding the ear tendons), which are divided into eight cranium bones and 14 facial bones. (Thick numbers indicate the numbers seen in the nearby picture.) Bones (8) 1 frontal bone (2) 3 temporal bone (2) 4 occipital bone (4) Sphinoid bone (14) 7 mandible (6) maxilla (2) palatine bone (2) 5 zygotic bone (9) 9 nasal bone (2) The sacral vertebrae (4 or 5), in adults, form the sacral vertebrae (3 to 5), in adults they form the valve."
---
# XLM-RoBERTa for question answering in Indian languages
Pre-trained XLM-RoBERTa with intermediate fine-tuning on the SQuAD dataset (English), followed by fine-tuning on the chaii dataset (Tamil and Hindi).
# How to use from the 🤗/transformers library
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("gokulkarthik/xlm-roberta-qa-chaii")
model = AutoModelForQuestionAnswering.from_pretrained("gokulkarthik/xlm-roberta-qa-chaii")
``` |
Ayham/xlnetgpt2_xsum7 | Ayham | 2021-12-06T13:13:12Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: xlnetgpt2_xsum7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnetgpt2_xsum7
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ncduy/marian-finetuned-kde4-en-to-fr | ncduy | 2021-12-06T08:46:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.8691179414982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.8691
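The BLEU score reported above combines clipped n-gram precision with a brevity penalty (the full metric uses 1- to 4-gram precisions); a toy unigram-only sketch illustrates the two ingredients:

```python
from collections import Counter
from math import exp

def bleu1(reference: str, hypothesis: str) -> float:
    """Toy unigram BLEU: clipped precision times brevity penalty.
    (The real metric multiplies in 2- to 4-gram precisions as well.)"""
    ref, hyp = reference.split(), hypothesis.split()
    ref_counts, hyp_counts = Counter(ref), Counter(hyp)
    # Each hypothesis word counts at most as often as it appears in the reference.
    clipped = sum(min(n, ref_counts[w]) for w, n in hyp_counts.items())
    precision = clipped / len(hyp)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else exp(1 - len(ref) / len(hyp))
    return bp * precision

print(bleu1("the cat is on the mat", "the cat the cat on the mat"))
```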
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tyoyo/t5-base-TEDxJP-1body-2context | tyoyo | 2021-12-06T08:37:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:te_dx_jp",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-1body-2context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-1body-2context
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4968
- Wer: 0.1969
- Mer: 0.1895
- Wil: 0.2801
- Wip: 0.7199
- Hits: 55902
- Substitutions: 6899
- Deletions: 3570
- Insertions: 2599
- Cer: 0.1727
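The reported WER and MER are consistent with the hit/substitution/deletion/insertion counts above: WER = (S + D + I) / (H + S + D), while MER divides by (H + S + D + I) instead. A quick check against the numbers above:

```python
hits, subs, dels, ins = 55902, 6899, 3570, 2599

# WER divides errors by the reference length N = H + S + D.
wer = (subs + dels + ins) / (hits + subs + dels)
# MER divides by all edit operations plus hits.
mer = (subs + dels + ins) / (hits + subs + dels + ins)
print(round(wer, 4), round(mer, 4))
```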
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.7136 | 1.0 | 746 | 0.5716 | 0.2512 | 0.2345 | 0.3279 | 0.6721 | 54430 | 7249 | 4692 | 4731 | 0.2344 |
| 0.6267 | 2.0 | 1492 | 0.5152 | 0.2088 | 0.2005 | 0.2917 | 0.7083 | 55245 | 6949 | 4177 | 2732 | 0.2009 |
| 0.5416 | 3.0 | 2238 | 0.4969 | 0.2025 | 0.1948 | 0.2851 | 0.7149 | 55575 | 6871 | 3925 | 2646 | 0.1802 |
| 0.5223 | 4.0 | 2984 | 0.4915 | 0.1989 | 0.1917 | 0.2816 | 0.7184 | 55652 | 6826 | 3893 | 2481 | 0.1754 |
| 0.4985 | 5.0 | 3730 | 0.4929 | 0.1991 | 0.1916 | 0.2814 | 0.7186 | 55759 | 6828 | 3784 | 2603 | 0.1753 |
| 0.4675 | 6.0 | 4476 | 0.4910 | 0.1969 | 0.1897 | 0.2799 | 0.7201 | 55834 | 6859 | 3678 | 2534 | 0.1756 |
| 0.445 | 7.0 | 5222 | 0.4940 | 0.1955 | 0.1884 | 0.2782 | 0.7218 | 55881 | 6821 | 3669 | 2485 | 0.1712 |
| 0.4404 | 8.0 | 5968 | 0.4932 | 0.1979 | 0.1903 | 0.2801 | 0.7199 | 55881 | 6828 | 3662 | 2643 | 0.1742 |
| 0.4525 | 9.0 | 6714 | 0.4951 | 0.1968 | 0.1893 | 0.2799 | 0.7201 | 55939 | 6897 | 3535 | 2632 | 0.1740 |
| 0.4077 | 10.0 | 7460 | 0.4968 | 0.1969 | 0.1895 | 0.2801 | 0.7199 | 55902 | 6899 | 3570 | 2599 | 0.1727 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
AlexMaclean/sentence-compression-roberta | AlexMaclean | 2021-12-06T04:22:17Z | 31 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentence-compression-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression-roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3465
- Accuracy: 0.8473
- F1: 0.6835
- Precision: 0.6835
- Recall: 0.6835
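F1 is the harmonic mean of precision and recall, so with precision and recall both at 0.6835 the reported F1 necessarily matches them. A one-function sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.6835, 0.6835), 4))
```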
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5312 | 1.0 | 50 | 0.5251 | 0.7591 | 0.0040 | 0.75 | 0.0020 |
| 0.4 | 2.0 | 100 | 0.4003 | 0.8200 | 0.5341 | 0.7113 | 0.4275 |
| 0.3355 | 3.0 | 150 | 0.3465 | 0.8473 | 0.6835 | 0.6835 | 0.6835 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetu-truncated-41f800 | diegor2 | 2021-12-06T00:23:37Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
metrics:
- bleu
model-index:
- name: t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16_en_ro_pre_processed
type: wmt16_en_ro_pre_processed
args: enro
metrics:
- name: Bleu
type: bleu
value: 0.0002
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4897
- Bleu: 0.0002
- Gen Len: 9.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 6.2585 | 1.0 | 76290 | 6.4897 | 0.0002 | 9.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Sancha/t5-small-finetuned-fi-to-en | Sancha | 2021-12-05T23:36:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt19",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt19
metrics:
- bleu
model-index:
- name: t5-small-finetuned-fi-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt19
type: wmt19
args: fi-en
metrics:
- name: Bleu
type: bleu
value: 1.2541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-fi-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5185
- Bleu: 1.2541
- Gen Len: 17.395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.413 | 1.0 | 6250 | 3.5378 | 1.2291 | 17.4057 |
| 3.342 | 2.0 | 12500 | 3.5185 | 1.2541 | 17.395 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
usc-isi/sbert-roberta-large-anli-mnli-snli | usc-isi | 2021-12-05T21:04:27Z | 8 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:anli",
"dataset:multi_nli",
"dataset:snli",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
language:
- en
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- anli
- multi_nli
- snli
---
# sbert-roberta-large-anli-mnli-snli
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
The model is weight initialized by RoBERTa-large and trained on ANLI (Nie et al., 2020), MNLI (Williams et al., 2018), and SNLI (Bowman et al., 2015) using the [`training_nli.py`](https://github.com/UKPLab/sentence-transformers/blob/v0.3.5/examples/training/nli/training_nli.py) example script.
Training Details:
- Learning rate: 2e-5
- Batch size: 8
- Pooling: Mean
- Training time: ~20 hours on one [NVIDIA GeForce RTX 2080 Ti](https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("usc-isi/sbert-roberta-large-anli-mnli-snli")
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (Hugging Face Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["This is an example sentence", "Each sentence is converted"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")
model = AutoModel.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```
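The masked averaging performed by `mean_pooling` can be illustrated without torch: only positions where the attention mask is 1 contribute to the sentence vector. A dependency-free sketch:

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors over non-padding positions only."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask == 1:
            count += 1
            for i, x in enumerate(vec):
                sums[i] += x
    return [s / max(count, 1) for s in sums]

# Two real tokens and one padding token: the padding vector is ignored.
emb = [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]
mask = [1, 1, 0]
print(mean_pool(emb, mask))
```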
## Evaluation Results
See section 4.1 of our paper for evaluation results.
## Full Model Architecture
```text
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
For more information about the project, see our paper:
> Ciosici, Manuel, et al. "Machine-Assisted Script Curation." _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations_, Association for Computational Linguistics, 2021, pp. 8–17. _ACLWeb_, <https://www.aclweb.org/anthology/2021.naacl-demos.2>.
## References
- Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. [A large annotated corpus for learning natural language inference](https://doi.org/10.18653/v1/D15-1075). In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
- Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. [Adversarial NLI: A new benchmark for natural language understanding](https://doi.org/10.18653/v1/2020.acl-main.441). In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 4885–4901, Online. Association for Computational Linguistics.
- Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. [A broad-coverage challenge corpus for sentence understanding through inference](https://doi.org/10.18653/v1/N18-1101). In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
|
chandank/bart-base-finetuned-kaggglenews-fact-corrector-I | chandank | 2021-12-05T20:45:53Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-fact-corrector-I
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-fact-corrector-I
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 432 | 1.5483 | 28.9811 | 16.5711 | 24.7826 | 26.4132 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-fact-corrector-II | chandank | 2021-12-05T20:22:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-fact-corrector-II
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-fact-corrector-II
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 305 | 1.5749 | 27.9313 | 15.1004 | 23.3282 | 25.2336 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-baseline-final | chandank | 2021-12-05T18:45:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-baseline-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-baseline-final
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6942
- Rouge1: 28.581
- Rouge2: 16.3417
- Rougel: 24.1277
- Rougelsum: 25.9797
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.7514 | 27.911 | 15.7038 | 23.6466 | 25.2111 | 20.0 |
| 2.0585 | 2.0 | 990 | 1.6655 | 28.7581 | 16.4875 | 24.2669 | 26.1676 | 20.0 |
| 1.4173 | 3.0 | 1485 | 1.6942 | 28.581 | 16.3417 | 24.1277 | 25.9797 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sultan/ArabicTransformer-large | sultan | 2021-12-05T17:06:51Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"funnel",
"feature-extraction",
"arxiv:2006.03236",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ArabicTransformer Large model (B8-8-8 with decoder)
<b>Paper</b> : ArabicTransformer: Efficient Large Arabic Language Model with Funnel Transformer and ELECTRA Objective (EMNLP21)
<b>Abstract</b>
Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pretraining cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.
<b>Description</b>
This model was pre-trained on 44GB of Arabic corpora using [Funnel Transformer with ELECTRA objective](https://arxiv.org/abs/2006.03236). We will update you with more details about the model and our accepted paper later at EMNLP21. Check our GitHub page for the latest updates and examples: https://github.com/salrowili/ArabicTransformer
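A minimal sketch of extracting contextual features with this checkpoint (assumes the `transformers` library and a model download; the Arabic sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModel

model_name = "sultan/ArabicTransformer-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a short Arabic sentence and run it through the Funnel Transformer encoder
inputs = tokenizer("السلام عليكم", return_tensors="pt")
outputs = model(**inputs)

# last_hidden_state holds one contextual vector per token position
print(outputs.last_hidden_state.shape)
```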
```bibtex
@inproceedings{alrowili-shanker-2021-arabictransformer-efficient,
title = "{A}rabic{T}ransformer: Efficient Large {A}rabic Language Model with Funnel Transformer and {ELECTRA} Objective",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.108",
pages = "1255--1261",
abstract = "Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, demonstrated by both AraBERT and AraELECTRA, shows an impressive result on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in the pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using less computational resources compared to other BERT-based models.",
}
``` |
danielbispov/t5-small-finetuned-fi-to-en | danielbispov | 2021-12-05T16:40:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt19",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt19
metrics:
- bleu
model-index:
- name: t5-small-finetuned-fi-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt19
type: wmt19
args: fi-en
metrics:
- name: Bleu
type: bleu
value: 1.129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-fi-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5235
- Bleu: 1.129
- Gen Len: 17.088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:|
| 3.414 | 1.0 | 6250 | 3.5235 | 1.129 | 17.088 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
pierreguillou/byt5-small-qa-squad-v1.1-portuguese | pierreguillou | 2021-12-05T15:42:20Z | 50 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"byt5",
"qa",
"pt",
"dataset:squad",
"arxiv:1907.06292",
"arxiv:2105.13626",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: pt
license: apache-2.0
tags:
- text2text-generation
- byt5
- pytorch
- qa
datasets: squad
metrics: squad
widget:
- text: 'question: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."'
- text: 'question: "Onde foi descoberta a Covid-19?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."'
---
# ByT5 small finetuned for Question Answering (QA) on SQUaD v1.1 Portuguese

Check our other QA models in Portuguese finetuned on SQUAD v1.1:
- [Portuguese BERT base cased QA](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese)
- [Portuguese BERT large cased QA](https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese)
- [Portuguese T5 base QA](https://huggingface.co/pierreguillou/t5-base-qa-squad-v1.1-portuguese)
## Introduction
The model was trained on the dataset SQUAD v1.1 in portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) on Google Colab from the language model [ByT5 small](https://huggingface.co/google/byt5-small) of Google.
## About ByT5
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
## Informations on the method used
All the information is in the blog post: ...
## Notebooks in Google Colab & GitHub
- Google Colab: ...
- GitHub: ...
## Performance
The results obtained are the following:
```
f1 = ...
exact match = ...
```
## How to use the model... with Pipeline
```python
import transformers
from transformers import pipeline
model_name = 'pierreguillou/byt5-small-qa-squad-v1.1-portuguese'
nlp = pipeline("text2text-generation", model=model_name)
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
input_text = r"""
question: "Quando começou a pandemia de Covid-19 no mundo?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
"""
input_text = input_text.replace('\n','')
input_text
# question: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
result = nlp(input_text)
result
# [{'generated_text': '1 de dezembro de 2019'}]
```
## How to use the model... with the Auto classes
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'pierreguillou/byt5-small-qa-squad-v1.1-portuguese'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
input_text = r"""
question: "Quando começou a pandemia de Covid-19 no mundo?"
context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
"""
input_text = input_text.replace('\n','')
input_text
# question: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano."
input_ids = tokenizer(input_text, return_tensors='pt').input_ids
outputs = model.generate(
input_ids,
max_length=64,
num_beams=1
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
result
# 1 de dezembro de 2019
```
## Limitations and bias
The training data used for this model comes from the Portuguese SQuAD dataset. It could contain a lot of unfiltered content, which is far from neutral, and may introduce biases.
## Author
Portuguese ByT5 small QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations. In particular: [Google AI](https://huggingface.co/google), [Hugging Face](https://huggingface.co/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) and [Google Colab](https://colab.research.google.com/).
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierreguillou2021byt5smallsquadv11portuguese,
title={Portuguese ByT5 small QA (Question Answering), finetuned on SQUAD v1.1},
author={Pierre Guillou},
year={2021}
}
``` |
ying-tina/wav2vec2-base-timit-demo-colab-test | ying-tina | 2021-12-05T14:55:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-test
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4283
- Wer: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7386 | 4.0 | 500 | 2.2419 | 1.0 |
| 0.9366 | 8.0 | 1000 | 0.4789 | 0.4807 |
| 0.3118 | 12.0 | 1500 | 0.4197 | 0.3973 |
| 0.1784 | 16.0 | 2000 | 0.4216 | 0.3614 |
| 0.1297 | 20.0 | 2500 | 0.4298 | 0.3507 |
| 0.1091 | 24.0 | 3000 | 0.4365 | 0.3437 |
| 0.0819 | 28.0 | 3500 | 0.4283 | 0.3356 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured | Intel | 2021-12-05T13:31:53Z | 95 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"arxiv:2111.05754",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
language: en
---
# 90% Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1
This model is the result of fine-tuning a Prune OFA 90% sparse pre-trained BERT-Large combined with knowledge distillation.
This model yields the following results on the SQuADv1.1 development set:<br>
`{"exact_match": 83.56669820245979, "f1": 90.20829352733487}`
For further details see our paper, [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), and our open source implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
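A minimal usage sketch with the `transformers` question-answering pipeline (the question and context below are illustrative; running it requires a model download):

```python
from transformers import pipeline

# Load the sparse BERT-Large QA checkpoint into a standard QA pipeline
qa = pipeline(
    "question-answering",
    model="Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint is a 90% sparse BERT-Large model fine-tuned on SQuADv1.1.",
)
print(result["answer"])
```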
|
megagonlabs/transformers-ud-japanese-electra-base-ginza-510 | megagonlabs | 2021-12-05T12:12:12Z | 15,942 | 2 | transformers | [
"transformers",
"pytorch",
"electra",
"feature-extraction",
"PyTorch",
"Transformers",
"spaCy",
"ELECTRA",
"GiNZA",
"mC4",
"UD_Japanese-BCCWJ",
"GSK2014-A",
"ja",
"MIT",
"arxiv:1910.10683",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- ja
thumbnail: "https://raw.githubusercontent.com/megagonlabs/ginza/static/docs/images/GiNZA_logo_4c_s.png"
tags:
- PyTorch
- Transformers
- spaCy
- ELECTRA
- GiNZA
- mC4
- UD_Japanese-BCCWJ
- GSK2014-A
- ja
- MIT
license: "mit"
datasets:
- mC4
- UD_Japanese_BCCWJ r2.8
- GSK2014-A(2019)
metrics:
- UAS
- LAS
- UPOS
---
# transformers-ud-japanese-electra-ginza-510 (sudachitra-wordpiece, mC4 Japanese)
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences extracted from the [mC4](https://huggingface.co/datasets/mc4) and finetuned by [spaCy v3](https://spacy.io/usage/v3) on [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html).
The base pretrained model is [megagonlabs/transformers-ud-japanese-electra-base-discriminator](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-discriminator).
The entire spaCy v3 model is distributed as a python package named [`ja_ginza_electra`](https://pypi.org/project/ja-ginza-electra/) from PyPI along with [`GiNZA v5`](https://github.com/megagonlabs/ginza) which provides some custom pipeline components to recognize the Japanese bunsetu-phrase structures.
Try running it as below:
```console
$ pip install ginza ja_ginza_electra
$ ginza
```
## Licenses
The models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
## Acknowledgments
This model is permitted to be published under the `MIT License` under a joint research agreement between NINJAL (National Institute for Japanese Language and Linguistics) and Megagon Labs Tokyo.
## Citations
- [mC4](https://huggingface.co/datasets/mc4)
Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
- [UD\_Japanese\_BCCWJ r2.8](https://universaldependencies.org/treebanks/ja_bccwj/index.html)
```
Asahara, M., Kanayama, H., Tanaka, T., Miyao, Y., Uematsu, S., Mori, S.,
Matsumoto, Y., Omura, M., & Murawaki, Y. (2018).
Universal Dependencies Version 2 for Japanese.
In LREC-2018.
```
- [GSK2014-A(2019)](https://www.gsk.or.jp/catalog/gsk2014-a/)
|
BigSalmon/MrLincoln12 | BigSalmon | 2021-12-04T21:32:35Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln12")
```
```
https://huggingface.co/spaces/BigSalmon/InformalToFormal
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
```
dee4hf/deeBERT | dee4hf | 2021-12-04T18:44:11Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | trying to create my first BERT model |
Mirelle/t5-small-finetuned-ro-to-en | Mirelle | 2021-12-04T18:09:52Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-ro-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 13.4499
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-ro-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5877
- Bleu: 13.4499
- Gen Len: 17.5073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.6167 | 0.05 | 2000 | 1.8649 | 9.7029 | 17.5753 |
| 1.4551 | 0.1 | 4000 | 1.7810 | 10.6382 | 17.5358 |
| 1.3723 | 0.16 | 6000 | 1.7369 | 11.1285 | 17.5158 |
| 1.3373 | 0.21 | 8000 | 1.7086 | 11.6173 | 17.5013 |
| 1.2935 | 0.26 | 10000 | 1.6890 | 12.0641 | 17.5038 |
| 1.2632 | 0.31 | 12000 | 1.6670 | 12.3012 | 17.5253 |
| 1.2463 | 0.37 | 14000 | 1.6556 | 12.3991 | 17.5153 |
| 1.2272 | 0.42 | 16000 | 1.6442 | 12.7392 | 17.4732 |
| 1.2052 | 0.47 | 18000 | 1.6328 | 12.8446 | 17.5143 |
| 1.1985 | 0.52 | 20000 | 1.6233 | 13.0892 | 17.4807 |
| 1.1821 | 0.58 | 22000 | 1.6153 | 13.1529 | 17.4952 |
| 1.1791 | 0.63 | 24000 | 1.6079 | 13.2964 | 17.5088 |
| 1.1698 | 0.68 | 26000 | 1.6038 | 13.3548 | 17.4842 |
| 1.154 | 0.73 | 28000 | 1.5957 | 13.3012 | 17.5053 |
| 1.1634 | 0.79 | 30000 | 1.5931 | 13.4203 | 17.5083 |
| 1.1487 | 0.84 | 32000 | 1.5893 | 13.3959 | 17.5123 |
| 1.1495 | 0.89 | 34000 | 1.5875 | 13.3745 | 17.4902 |
| 1.1458 | 0.94 | 36000 | 1.5877 | 13.4129 | 17.5043 |
| 1.1465 | 1.0 | 38000 | 1.5877 | 13.4499 | 17.5073 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
NbAiLabArchive/test_NCC_small_flax | NbAiLabArchive | 2021-12-04T17:46:54Z | 3 | 0 | transformers | [
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | Just for performing some experiments. Do not use. |
afreireosorio/opus-mt-en-de-finetuned-en-to-de | afreireosorio | 2021-12-04T17:43:39Z | 148 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-de-finetuned-en-to-de
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 26.4396
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6798
- Bleu: 26.4396
- Gen Len: 24.8156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 2.0864 | 1.0 | 568611 | 1.6798 | 26.4396 | 24.8156 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-batch8 | rossanez | 2021-12-04T14:31:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-batch8
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 10.039
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-batch8
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1282
- Bleu: 10.039
- Gen Len: 17.3839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 375 | 2.0912 | 9.9147 | 17.3084 |
| 1.5593 | 2.0 | 750 | 2.0858 | 9.9386 | 17.4299 |
| 1.4383 | 3.0 | 1125 | 2.1137 | 9.9804 | 17.34 |
| 1.3562 | 4.0 | 1500 | 2.1198 | 9.9685 | 17.367 |
| 1.3562 | 5.0 | 1875 | 2.1282 | 10.039 | 17.3839 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-nofp16 | rossanez | 2021-12-04T13:59:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-nofp16
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.5801
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-nofp16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1460
- Bleu: 9.5801
- Gen Len: 17.333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1899 | 9.4821 | 17.312 |
| No log | 2.0 | 376 | 2.1986 | 9.5705 | 17.3853 |
| 1.2118 | 3.0 | 564 | 2.1933 | 9.448 | 17.3293 |
| 1.2118 | 4.0 | 752 | 2.1607 | 9.563 | 17.336 |
| 1.2118 | 5.0 | 940 | 2.1460 | 9.5801 | 17.333 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-wd-01 | rossanez | 2021-12-04T13:43:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-wd-01
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.6027
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-wd-01
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0482
- Bleu: 9.6027
- Gen Len: 17.3776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.0502 | 9.3675 | 17.3983 |
| No log | 2.0 | 376 | 2.0590 | 9.4393 | 17.3869 |
| 1.6509 | 3.0 | 564 | 2.0639 | 9.3886 | 17.3806 |
| 1.6509 | 4.0 | 752 | 2.0498 | 9.5802 | 17.3846 |
| 1.6509 | 5.0 | 940 | 2.0482 | 9.6027 | 17.3776 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-epochs5 | rossanez | 2021-12-04T12:47:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-epochs5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 5.8913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-epochs5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2040
- Bleu: 5.8913
- Gen Len: 17.5408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.3366 | 2.8075 | 17.8188 |
| No log | 2.0 | 376 | 2.2557 | 4.8765 | 17.626 |
| 2.6928 | 3.0 | 564 | 2.2246 | 5.5454 | 17.5534 |
| 2.6928 | 4.0 | 752 | 2.2086 | 5.8511 | 17.5461 |
| 2.6928 | 5.0 | 940 | 2.2040 | 5.8913 | 17.5408 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
NbAiLabArchive/test_NCC_small_pytorch | NbAiLabArchive | 2021-12-04T12:45:02Z | 6 | 0 | transformers | [
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | Just for performing some experiments. Do not use. |
chandank/bart-base-finetuned-kaggglenews-batch8-LR2E6 | chandank | 2021-12-04T12:07:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-LR2E6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-LR2E6
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.7971 | 26.6141 | 13.9957 | 22.3012 | 23.7509 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-LR4 | chandank | 2021-12-04T11:53:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-LR4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-LR4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6037 | 28.1247 | 15.9399 | 23.8676 | 25.3739 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-LR1 | chandank | 2021-12-04T11:37:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-LR1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-LR1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6826 | 27.5191 | 15.0672 | 23.3065 | 24.7163 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Eyvaz/wav2vec2-base-russian-demo-kaggle | Eyvaz | 2021-12-04T11:00:23Z | 33 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-russian-demo-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-demo-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0102 | 1.03 | 500 | inf | 0.9997 |
| 0.0068 | 2.06 | 1000 | inf | 0.9997 |
| 0.0 | 3.09 | 1500 | inf | 0.9997 |
| 0.0313 | 4.12 | 2000 | inf | 0.9997 |
| 0.0 | 5.15 | 2500 | inf | 0.9997 |
| 0.0052 | 6.19 | 3000 | inf | 0.9997 |
| 0.0287 | 7.22 | 3500 | inf | 0.9997 |
| 0.0 | 8.25 | 4000 | inf | 0.9997 |
| 0.01 | 9.28 | 4500 | inf | 0.9997 |
| 0.0 | 10.31 | 5000 | inf | 0.9997 |
| 0.3919 | 11.34 | 5500 | inf | 0.9997 |
| 0.0 | 12.37 | 6000 | inf | 0.9997 |
| 0.0 | 13.4 | 6500 | inf | 0.9997 |
| 0.0 | 14.43 | 7000 | inf | 0.9997 |
| 0.6422 | 15.46 | 7500 | inf | 0.9997 |
| 0.0 | 16.49 | 8000 | inf | 0.9997 |
| 0.0 | 17.53 | 8500 | inf | 0.9997 |
| 0.0 | 18.56 | 9000 | inf | 0.9997 |
| 0.0 | 19.59 | 9500 | inf | 0.9997 |
| 0.0 | 20.62 | 10000 | inf | 0.9997 |
| 0.0427 | 21.65 | 10500 | inf | 0.9997 |
| 0.0 | 22.68 | 11000 | inf | 0.9997 |
| 0.0 | 23.71 | 11500 | inf | 0.9997 |
| 0.0 | 24.74 | 12000 | inf | 0.9997 |
| 0.0091 | 25.77 | 12500 | inf | 0.9997 |
| 0.1243 | 26.8 | 13000 | inf | 0.9997 |
| 0.0 | 27.83 | 13500 | inf | 0.9997 |
| 0.0 | 28.87 | 14000 | inf | 0.9997 |
| 0.0 | 29.9 | 14500 | inf | 0.9997 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
AlexMaclean/sentence-compression | AlexMaclean | 2021-12-04T08:10:24Z | 69 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentence-compression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.8912
- F1: 0.8367
- Precision: 0.8495
- Recall: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2686 | 1.0 | 10000 | 0.2667 | 0.8894 | 0.8283 | 0.8725 | 0.7884 |
| 0.2205 | 2.0 | 20000 | 0.2704 | 0.8925 | 0.8372 | 0.8579 | 0.8175 |
| 0.1476 | 3.0 | 30000 | 0.2973 | 0.8912 | 0.8367 | 0.8495 | 0.8243 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
eli/zero-shot-absa | eli | 2021-12-04T06:02:33Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | # zero-shot-absa
## About
The goal of this project is to accomplish aspect-based sentiment analysis without dependence on the severely limited training data available - that is, the task of aspect-based sentiment analysis is not explicitly supervised, an approach known as “zero-shot learning”. Sentiment analysis has already been used extensively in industry for things such as customer feedback; however, a model such as the one I am proposing would be able to identify topics in a document and also identify the sentiment of the author toward (or associated with) each topic, which allows for detection of much more specific feedback or commentary than simple sentiment analysis.
## Details
There will be three models in the project; the first, m1, will use Latent Dirichlet Allocation to find topics in documents, implemented through gensim. The second, m2, is a zero-shot text classification model, available from Hugging Face, which I plan to fine-tune on the output of the LDA model over various tweets and reviews. The final piece, m3, is the sentiment intensity analyzer available from NLTK's vader module. The architecture is as follows: m1 generates a list of topics for each document in the dataset. I then create a mapping T from each document to its corresponding list of topics. It would be nice to have labeled data here that, given the output T(doc), supplies a human-generated topic name. Since that isn't available, the zero-shot text classifier from Hugging Face will be used to generate a topic name, which exists only to make the output interpretable. Then, for each topic t in T, we search the document for all sentences containing at least one word in t and use NLTK to compute the average sentiment score of those sentences. The model output is then a dictionary whose keys are the topic names found in the document and whose values are the corresponding average NLTK sentiment scores.
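The topic-matching and sentiment-averaging step described above can be sketched in a few lines. The stub topic list, topic-naming function, and sentiment scorer below are hypothetical stand-ins for what gensim's LDA (m1), the Hugging Face zero-shot classifier (m2), and NLTK's VADER analyzer (m3) would actually produce:

```python
from typing import Callable, Dict, List

def aspect_sentiments(doc: str,
                      topics: List[List[str]],
                      score_fn: Callable[[str], float],
                      name_fn: Callable[[List[str]], str]) -> Dict[str, float]:
    """For each topic (a list of keywords), average the sentiment of every
    sentence in `doc` that mentions at least one of its keywords."""
    sentences = [s.strip() for s in doc.split(".") if s.strip()]
    result = {}
    for topic in topics:
        hits = [s for s in sentences
                if any(w.lower() in s.lower() for w in topic)]
        if hits:
            result[name_fn(topic)] = sum(score_fn(s) for s in hits) / len(hits)
    return result

# Hypothetical stand-ins for m1 / m2 / m3 (LDA, zero-shot labeler, VADER):
topics = [["battery", "charge"], ["screen"]]          # m1: topic keyword lists
name_fn = lambda t: t[0]                              # m2: topic -> readable name
score_fn = lambda s: 1.0 if "great" in s else -1.0    # m3: sentence -> polarity

doc = "The battery life is great. The screen cracked on day one."
print(aspect_sentiments(doc, topics, score_fn, name_fn))
# -> {'battery': 1.0, 'screen': -1.0}
```

In the real pipeline, `score_fn` would be VADER's compound score and `name_fn` the zero-shot classifier's top label; only the aggregation shown here is fixed by the design.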
## Dependencies
- `scikit-learn`
- `gensim`
- `NLTK`
- `transformers` (Hugging Face)
## Data
The data this project will be trained on come from Twitter and Yelp. With access to the Twitter API through a developer account, one can create a large corpus from tweets. Yelp has very relevant data for this task available at https://www.yelp.com/dataset. I will train / fine-tune each model twice, once for Twitter and once for Yelp, on a training set generated by scikit-learn.
Labeled data for testing are available at https://europe.naverlabs.com/Research/Natural-Language-Processing/Aspect-Based-Sentiment-Analysis-Dataset/ . These data are very straightforward to use, as they have annotations of topics and the associated sentiment scores for each sentence. |
marefa-nlp/marefa-ner | marefa-nlp | 2021-12-04T05:21:57Z | 2,850 | 23 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"ar",
"dataset:Marefa-NER",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z |
---
language: ar
datasets:
- Marefa-NER
widget:
- text: "في استاد القاهرة، بدأ حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم"
---
# Tebyan تبيـان
## Marefa Arabic Named Entity Recognition Model
## نموذج المعرفة لتصنيف أجزاء النص
<p align="center">
    <img src="https://huggingface.co/marefa-nlp/marefa-ner/resolve/main/assets/marefa-tebyan-banner.png" alt="Marefa Arabic NER Model" width="600"/>
</p>
---------
**Version**: 1.3
**Last Update:** 3-12-2021
## Model description
**Marefa-NER** is a large Arabic Named Entity Recognition (NER) model built on a completely new dataset, aiming to extract up to 9 different types of entities
```
Person, Location, Organization, Nationality, Job, Product, Event, Time, Art-Work
```
نموذج المعرفة لتصنيف أجزاء النص. نموذج جديد كليا من حيث البيانات المستخدمة في تدريب النموذج.
كذلك يستهدف النموذج تصنيف حتى 9 أنواع مختلفة من أجزاء النص
```
شخص - مكان - منظمة - جنسية - وظيفة - منتج - حدث - توقيت - عمل إبداعي
```
## How to use كيف تستخدم النموذج
*You can test the model quickly by checking this [Colab notebook](https://colab.research.google.com/drive/1OGp9Wgm-oBM5BBhTLx6Qow4dNRSJZ-F5?usp=sharing)*
----
Install the following Python packages
`$ pip3 install transformers==4.8.0 nltk==3.5 protobuf==3.15.3 torch==1.9.0 `
> If you are using `Google Colab`, please restart your runtime after installing the packages.
-----------
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
import numpy as np
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
custom_labels = ["O", "B-job", "I-job", "B-nationality", "B-person", "I-person", "B-location","B-time", "I-time", "B-event", "I-event", "B-organization", "I-organization", "I-location", "I-nationality", "B-product", "I-product", "B-artwork", "I-artwork"]
def _extract_ner(text: str, model: AutoModelForTokenClassification,
tokenizer: AutoTokenizer, start_token: str="▁"):
tokenized_sentence = tokenizer([text], padding=True, truncation=True, return_tensors="pt")
tokenized_sentences = tokenized_sentence['input_ids'].numpy()
with torch.no_grad():
output = model(**tokenized_sentence)
last_hidden_states = output[0].numpy()
label_indices = np.argmax(last_hidden_states[0], axis=1)
tokens = tokenizer.convert_ids_to_tokens(tokenized_sentences[0])
special_tags = set(tokenizer.special_tokens_map.values())
grouped_tokens = []
for token, label_idx in zip(tokens, label_indices):
if token not in special_tags:
if not token.startswith(start_token) and len(token.replace(start_token,"").strip()) > 0:
grouped_tokens[-1]["token"] += token
else:
grouped_tokens.append({"token": token, "label": custom_labels[label_idx]})
# extract entities
ents = []
prev_label = "O"
for token in grouped_tokens:
label = token["label"].replace("I-","").replace("B-","")
if token["label"] != "O":
if label != prev_label:
ents.append({"token": [token["token"]], "label": label})
else:
ents[-1]["token"].append(token["token"])
prev_label = label
# group tokens
ents = [{"token": "".join(rec["token"]).replace(start_token," ").strip(), "label": rec["label"]} for rec in ents ]
return ents
model_cp = "marefa-nlp/marefa-ner"
tokenizer = AutoTokenizer.from_pretrained(model_cp)
model = AutoModelForTokenClassification.from_pretrained(model_cp, num_labels=len(custom_labels))
samples = [
"تلقى تعليمه في الكتاب ثم انضم الى الأزهر عام 1873م. تعلم على يد السيد جمال الدين الأفغاني والشيخ محمد عبده",
"بعد عودته إلى القاهرة، التحق نجيب الريحاني فرقة جورج أبيض، الذي كان قد ضمَّ - قُبيل ذلك - فرقته إلى فرقة سلامة حجازي . و منها ذاع صيته",
"في استاد القاهرة، قام حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم",
"من فضلك أرسل هذا البريد الى صديقي جلال الدين في تمام الساعة الخامسة صباحا في يوم الثلاثاء القادم",
"امبارح اتفرجت على مباراة مانشستر يونايتد مع ريال مدريد في غياب الدون كرستيانو رونالدو",
"لا تنسى تصحيني الساعة سبعة, و ضيف في الجدول اني احضر مباراة نادي النصر غدا",
]
# [optional]
samples = [ " ".join(word_tokenize(sample.strip())) for sample in samples if sample.strip() != "" ]
for sample in samples:
ents = _extract_ner(text=sample, model=model, tokenizer=tokenizer, start_token="▁")
print(sample)
for ent in ents:
print("\t",ent["token"],"==>",ent["label"])
print("========\n")
```
Output
```
تلقى تعليمه في الكتاب ثم انضم الى الأزهر عام 1873م . تعلم على يد السيد جمال الدين الأفغاني والشيخ محمد عبده
الأزهر ==> organization
عام 1873م ==> time
السيد جمال الدين الأفغاني ==> person
محمد عبده ==> person
========
بعد عودته إلى القاهرة، التحق نجيب الريحاني فرقة جورج أبيض، الذي كان قد ضمَّ - قُبيل ذلك - فرقته إلى فرقة سلامة حجازي . و منها ذاع صيته
القاهرة، ==> location
نجيب الريحاني ==> person
فرقة جورج أبيض، ==> organization
فرقة سلامة حجازي ==> organization
========
في استاد القاهرة، قام حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم
استاد القاهرة، ==> location
بطولة كأس الأمم الأفريقية ==> event
رئيس الجمهورية ==> job
رئيس ==> job
الاتحاد الدولي لكرة القدم ==> organization
========
من فضلك أرسل هذا البريد الى صديقي جلال الدين في تمام الساعة الخامسة صباحا في يوم الثلاثاء القادم
جلال الدين ==> person
الساعة الخامسة صباحا ==> time
يوم الثلاثاء القادم ==> time
========
امبارح اتفرجت على مباراة مانشستر يونايتد مع ريال مدريد في غياب الدون كرستيانو رونالدو
مانشستر يونايتد ==> organization
ريال مدريد ==> organization
كرستيانو رونالدو ==> person
========
لا تنسى تصحيني الساعة سبعة , و ضيف في الجدول اني احضر مباراة نادي النصر غدا
الساعة سبعة ==> time
نادي النصر ==> organization
غدا ==> time
========
```
## Fine-Tuning
Check this [notebook](https://colab.research.google.com/drive/1WUYrnmDFFEItqGMvbyjqZEJJqwU7xQR-?usp=sharing) to fine-tune the NER model
## Evaluation
We tested the model against a test set of 1959 sentences. The results are in the following table:
| type | f1-score | precision | recall | support |
|:-------------|-----------:|------------:|---------:|----------:|
| person | 0.93298 | 0.931479 | 0.934487 | 4335 |
| location | 0.891537 | 0.896926 | 0.886212 | 4939 |
| time | 0.873003 | 0.876087 | 0.869941 | 1853 |
| nationality | 0.871246 | 0.843153 | 0.901277 | 2350 |
| job | 0.837656 | 0.79912 | 0.880097 | 2477 |
| organization | 0.781317 | 0.773328 | 0.789474 | 2299 |
| event | 0.686695 | 0.733945 | 0.645161 | 744 |
| artwork | 0.653552 | 0.678005 | 0.630802 | 474 |
| product | 0.625483 | 0.553531 | 0.718935 | 338 |
| **weighted avg** | 0.859008 | 0.852365 | 0.86703 | 19809 |
| **micro avg** | 0.858771 | 0.850669 | 0.86703 | 19809 |
| **macro avg** | 0.79483 | 0.787286 | 0.806265 | 19809 |
## Acknowledgment شكر و تقدير
قام بإعداد البيانات التي تم تدريب النموذج عليها, مجموعة من المتطوعين الذين قضوا ساعات يقومون بتنقيح البيانات و مراجعتها
- على سيد عبد الحفيظ - إشراف
- نرمين محمد عطيه
- صلاح خيرالله
- احمد علي عبدربه
- عمر بن عبد العزيز سليمان
- محمد ابراهيم الجمال
- عبدالرحمن سلامه خلف
- إبراهيم كمال محمد سليمان
- حسن مصطفى حسن
- أحمد فتحي سيد
- عثمان مندو
- عارف الشريف
- أميرة محمد محمود
- حسن سعيد حسن
- عبد العزيز علي البغدادي
- واثق عبدالملك الشويطر
- عمرو رمضان عقل الحفناوي
- حسام الدين أحمد على
- أسامه أحمد محمد محمد
- حاتم محمد المفتي
- عبد الله دردير
- أدهم البغدادي
- أحمد صبري
- عبدالوهاب محمد محمد
- أحمد محمد عوض |
aseda/t5-small-finetuned-xsum | aseda | 2021-12-04T04:10:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
templates/text-classification | templates | 2021-12-04T03:29:21Z | 0 | 2 | generic | [
"generic",
"text-classification",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
library_name: generic
---
# Text Classification repository template
This is a template repository for Text Classification to support generic inference with Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
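A minimal sketch of what such a `pipeline.py` might look like. The keyword lexicon here is a placeholder assumption standing in for a real model load (which would normally happen once in `__init__`, e.g. via `transformers`), and the return shape — a list of label/score dictionaries per input — follows the common text-classification convention; check it against the template's own `pipeline.py` before relying on it:

```python
from typing import Dict, List

class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # In a real repo this would load the model/tokenizer from `path`
        # exactly once; here a tiny keyword lexicon stands in for it.
        self.positive = {"good", "great", "excellent"}

    def __call__(self, inputs: str) -> List[List[Dict[str, float]]]:
        # Score = fraction of words found in the positive lexicon.
        words = inputs.lower().split()
        pos = sum(w.strip(".,!?") in self.positive for w in words)
        score = pos / max(len(words), 1)
        return [[{"label": "POSITIVE", "score": score},
                 {"label": "NEGATIVE", "score": 1.0 - score}]]

pipe = PreTrainedPipeline()
print(pipe("This template is great!"))
```

The Inference API constructs the pipeline once and then calls it with each raw input string, so expensive setup belongs in `__init__` and `__call__` should stay lightweight.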
## How to start
First create a repo in https://hf.co/new.
Then clone this template and push it to your repo.
```
git clone https://huggingface.co/templates/text-classification
cd text-classification
git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME
git push --force
``` |
marciovbarbosa/t5-small-finetuned-de-to-en-lr1e-4 | marciovbarbosa | 2021-12-04T02:55:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en-lr1e-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 11.427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en-lr1e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8228
- Bleu: 11.427
- Gen Len: 17.2674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
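The `linear` scheduler above decays the learning rate linearly from its initial value down to zero over the course of training (with an optional linear warmup first; this run uses none). A simplified sketch of that schedule — a reimplementation for illustration, not the library's code:

```python
def linear_schedule(step: int, base_lr: float, total_steps: int, warmup_steps: int = 0) -> float:
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if warmup_steps > 0 and step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up during warmup
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)  # decay to zero

# With lr=1e-4 over 2720 total steps (10 epochs of 272 steps):
print(linear_schedule(0, 1e-4, 2720))     # full rate at the start (no warmup)
print(linear_schedule(1360, 1e-4, 2720))  # half the rate midway
print(linear_schedule(2720, 1e-4, 2720))  # zero at the very end
```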
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 272 | 1.9605 | 9.0786 | 17.3148 |
| 2.3992 | 2.0 | 544 | 1.8884 | 10.1443 | 17.3301 |
| 2.3992 | 3.0 | 816 | 1.8647 | 10.4816 | 17.3258 |
| 2.0832 | 4.0 | 1088 | 1.8473 | 10.7396 | 17.3231 |
| 2.0832 | 5.0 | 1360 | 1.8343 | 11.0937 | 17.2621 |
| 1.9193 | 6.0 | 1632 | 1.8282 | 11.1303 | 17.3098 |
| 1.9193 | 7.0 | 1904 | 1.8234 | 11.2971 | 17.2991 |
| 1.8351 | 8.0 | 2176 | 1.8241 | 11.3433 | 17.2621 |
| 1.8351 | 9.0 | 2448 | 1.8224 | 11.394 | 17.2691 |
| 1.7747 | 10.0 | 2720 | 1.8228 | 11.427 | 17.2674 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
marciovbarbosa/t5-small-finetuned-de-to-en | marciovbarbosa | 2021-12-04T00:56:09Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.2166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9417
- Bleu: 9.2166
- Gen Len: 17.3404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 272 | 2.1660 | 3.8515 | 17.6289 |
| 2.6678 | 2.0 | 544 | 2.0656 | 6.4422 | 17.4842 |
| 2.6678 | 3.0 | 816 | 2.0203 | 7.4348 | 17.3741 |
| 2.4316 | 4.0 | 1088 | 1.9926 | 8.0914 | 17.3658 |
| 2.4316 | 5.0 | 1360 | 1.9739 | 8.6535 | 17.3461 |
| 2.3307 | 6.0 | 1632 | 1.9603 | 8.8757 | 17.3768 |
| 2.3307 | 7.0 | 1904 | 1.9509 | 9.0744 | 17.3511 |
| 2.2945 | 8.0 | 2176 | 1.9466 | 9.1111 | 17.3418 |
| 2.2945 | 9.0 | 2448 | 1.9427 | 9.1969 | 17.3351 |
| 2.2666 | 10.0 | 2720 | 1.9417 | 9.2166 | 17.3404 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
annafavaro/bert-base-uncased-finetuned-addresso | annafavaro | 2021-12-03T23:48:50Z | 34 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-addresso
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-addresso
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ffsouza/t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro | ffsouza | 2021-12-03T21:45:00Z | 38 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
model-index:
- name: t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
rtoguchi/t5-small-finetuned-en-to-ro-fp16_off-lr_2e-7-weight_decay_0.001 | rtoguchi | 2021-12-03T19:24:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-fp16_off-lr_2e-7-weight_decay_0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 4.7258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-fp16_off-lr_2e-7-weight_decay_0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4943
- Bleu: 4.7258
- Gen Len: 18.7149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.047 | 1.0 | 7629 | 1.4943 | 4.7258 | 18.7149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jenspt/byt5_ft_all_clean_data_lr_1e4 | jenspt | 2021-12-03T18:11:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | from transformers import TrainingArguments

training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
#per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler (used to be 500)
weight_decay=0.01, # strength of weight decay
learning_rate=0.1e-3, # default = 5e-5=0.5e-4
logging_dir='./logs', # directory for storing logs
logging_steps=50,
#eval_steps = 100,
overwrite_output_dir = True,
save_strategy = 'epoch',
#logging_strategy = 'epoch',
) |
NaliniK/distilbert-base-uncased-finetuned-cola | NaliniK | 2021-12-03T17:21:08Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5494735380761103
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8239
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5235 | 1.0 | 535 | 0.5402 | 0.4156 |
| 0.3484 | 2.0 | 1070 | 0.5272 | 0.5233 |
| 0.2381 | 3.0 | 1605 | 0.6665 | 0.5050 |
| 0.1746 | 4.0 | 2140 | 0.7512 | 0.5429 |
| 0.1308 | 5.0 | 2675 | 0.8239 | 0.5495 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chrommium/sbert_large-finetuned-sent_in_news_sents | chrommium | 2021-12-03T16:18:40Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7056
- Accuracy: 0.7301
- F1: 0.5210
## Model examples
The model reacts to the placeholder label X in a news text. For example:
- For 'Газпром отозвал лицензию у X, сообщает Финам' ('Gazprom revoked X's license, Finam reports'), the model returns the negative label -3
- For 'X отозвал лицензию у Сбербанка, сообщает Финам' ('X revoked Sberbank's license, Finam reports'), the model returns the neutral label 0
- For 'Газпром отозвал лицензию у Сбербанка, сообщает X' ('Gazprom revoked Sberbank's license, X reports'), the model returns the neutral label 0
- For 'X демонстрирует высокую прибыль, сообщает Финам' ('X shows high profit, Finam reports'), the model returns the positive label 1
## Simple example of News preprocessing for Russian before BERT
```
from natasha import (
Segmenter,
MorphVocab,
NewsEmbedding,
NewsMorphTagger,
NewsSyntaxParser,
NewsNERTagger,
PER,
NamesExtractor,
Doc
)
segmenter = Segmenter()
emb = NewsEmbedding()
morph_tagger = NewsMorphTagger(emb)
syntax_parser = NewsSyntaxParser(emb)
morph_vocab = MorphVocab()
### ----------------------------- key sentences block -----------------------------
def find_synax_tokens_with_order(doc, start, tokens, text_arr, full_str):
    ''' Finds all syntax tokens that correspond to a given set of plain tokens
    (found for a particular NER by the other functions).
    Returns a dict of the found syntax tokens (keyed by a token id composed of
    the sentence number and the token number within the sentence).
    Starts the search at the given position in the syntax-token list and also
    returns the stop position from which the search for the next NER should continue.
    '''
found = []
in_str = False
str_candidate = ''
str_counter = 0
if len(text_arr) == 0:
return [], start
for i in range(start, len(doc.syntax.tokens)):
t = doc.syntax.tokens[i]
if in_str:
str_counter += 1
if str_counter < len(text_arr) and t.text == text_arr[str_counter]:
str_candidate += t.text
found.append(t)
if str_candidate == full_str:
return found, i+1
else:
in_str = False
str_candidate = ''
str_counter = 0
found = []
if t.text == text_arr[0]:
found.append(t)
str_candidate = t.text
if str_candidate == full_str:
return found, i+1
in_str = True
return [], len(doc.syntax.tokens)
def find_tokens_in_diap_with_order(doc, start_token, diap):
    ''' Finds all plain tokens (without syntactic information) that fall into
    the given span. These spans come from the NER annotation.
    Returns the found tokens both as an array of tokens and as an array of strings.
    Starts the search at the given position in the string and also returns the stop position.
    '''
found_tokens = []
found_text = []
full_str = ''
next_i = 0
for i in range(start_token, len(doc.tokens)):
t = doc.tokens[i]
if t.start > diap[-1]:
next_i = i
break
if t.start in diap:
found_tokens.append(t)
found_text.append(t.text)
full_str += t.text
return found_tokens, found_text, full_str, next_i
def add_found_arr_to_dict(found, dict_dest):
for synt in found:
dict_dest.update({synt.id: synt})
return dict_dest
def make_all_syntax_dict(doc):
all_syntax = {}
for synt in doc.syntax.tokens:
all_syntax.update({synt.id: synt})
return all_syntax
def is_consiquent(id_1, id_2):
    ''' Checks whether the tokens immediately follow each other, judging by their ids. '''
id_1_list = id_1.split('_')
id_2_list = id_2.split('_')
if id_1_list[0] != id_2_list[0]:
return False
return int(id_1_list[1]) + 1 == int(id_2_list[1])
def replace_found_to(found, x_str):
    ''' Replaces a sequence of NER tokens with a placeholder. '''
prev_id = '0_0'
for synt in found:
if is_consiquent(prev_id, synt.id):
synt.text = ''
else:
synt.text = x_str
prev_id = synt.id
def analyze_doc(text):
    ''' Runs Natasha to analyze the document. '''
doc = Doc(text)
doc.segment(segmenter)
doc.tag_morph(morph_tagger)
doc.parse_syntax(syntax_parser)
ner_tagger = NewsNERTagger(emb)
doc.tag_ner(ner_tagger)
return doc
def find_non_sym_syntax_short(entity_name, doc, add_X=False, x_str='X'):
    ''' Looks for the given entity in the text among all NERs (possibly in a different grammatical form).
    entity_name - the entity to look for;
    doc - a document preprocessed by Natasha;
    add_X - whether to replace the entity with a placeholder;
    x_str - the replacement text.
    Returns:
    all_found_syntax - a dict of all matching tokens that form the sought entities,
    with the NER replaced by the placeholder if requested;
    all_syntax - a dict of all tokens.
    '''
all_found_syntax = {}
current_synt_number = 0
current_tok_number = 0
    # iterate over all found NERs
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
diap = range(span.start, span.stop)
        # build a dict of all syntax elements (key: an id made of the sentence number and the number within the sentence)
all_syntax = make_all_syntax_dict(doc)
        # find all plain tokens inside the NER
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc, current_tok_number,
diap)
        # from the found plain tokens, find all syntax tokens inside this NER
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens, found_text,
full_str)
        # if the NER text matches the given entity, perform the replacement
if entity_name.find(span.normal) >= 0 or span.normal.find(entity_name) >= 0:
if add_X:
replace_found_to(found, x_str)
all_found_syntax = add_found_arr_to_dict(found, all_found_syntax)
return all_found_syntax, all_syntax
def key_sentences(all_found_syntax):
    ''' Finds the numbers of the sentences that contain the sought NER. '''
key_sent_numb = {}
for synt in all_found_syntax.keys():
key_sent_numb.update({synt.split('_')[0]: 1})
return key_sent_numb
def openinig_punct(x):
opennings = ['«', '(']
return x in opennings
def key_sentences_str(entitiy_name, doc, add_X=False, x_str='X', return_all=True):
    ''' Builds the final text, which contains only the sentences with the key entity;
    if requested, that entity is replaced with a placeholder.
    '''
all_found_syntax, all_syntax = find_non_sym_syntax_short(entitiy_name, doc, add_X, x_str)
key_sent_numb = key_sentences(all_found_syntax)
str_ret = ''
for s in all_syntax.keys():
if (s.split('_')[0] in key_sent_numb.keys()) or (return_all):
to_add = all_syntax[s]
if s in all_found_syntax.keys():
to_add = all_found_syntax[s]
else:
if to_add.rel == 'punct' and not openinig_punct(to_add.text):
str_ret = str_ret.rstrip()
str_ret += to_add.text
if (not openinig_punct(to_add.text)) and (to_add.text != ''):
str_ret += ' '
return str_ret
### ----------------------------- key entities block -----------------------------
def find_synt(doc, synt_id):
for synt in doc.syntax.tokens:
if synt.id == synt_id:
return synt
return None
def is_subj(doc, synt, recursion_list=[]):
    ''' Reports whether the word is the subject or part of a compound subject. '''
if synt.rel == 'nsubj':
return True
if synt.rel == 'appos':
found_head = find_synt(doc, synt.head_id)
if found_head.id in recursion_list:
return False
return is_subj(doc, found_head, recursion_list + [synt.id])
return False
def find_subjects_in_syntax(doc):
    ''' Returns a dict that states, for each NER, whether it is
    the subject of its sentence.
    Returns the NER start position and whether it was a subject (or appos).
    '''
found_subjects = {}
current_synt_number = 0
current_tok_number = 0
for span in doc.spans:
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
found_subjects.update({span.start: 0})
diap = range(span.start, span.stop)
found_tokens, found_text, full_str, current_tok_number = find_tokens_in_diap_with_order(doc,
current_tok_number,
diap)
found, current_synt_number = find_synax_tokens_with_order(doc, current_synt_number, found_tokens,
found_text, full_str)
found_subjects.update({span.start: 0})
for synt in found:
if is_subj(doc, synt):
found_subjects.update({span.start: 1})
return found_subjects
def entity_weight(lst, c=1):
return c*lst[0]+lst[1]
def determine_subject(found_subjects, doc, new_agency_list, return_best=True, threshold=0.75):
    ''' Determines the key NER and the list of the most important NERs, based on how many
    times each of them occurs in the text overall and how many times as a subject. '''
objects_arr = []
objects_arr_ners = []
should_continue = False
for span in doc.spans:
should_continue = False
span.normalize(morph_vocab)
if span.type != 'ORG':
continue
if span.normal in new_agency_list:
continue
for i in range(len(objects_arr)):
t, lst = objects_arr[i]
if t.find(span.normal) >= 0:
lst[0] += 1
lst[1] += found_subjects[span.start]
should_continue = True
break
if span.normal.find(t) >= 0:
objects_arr[i] = (span.normal, [lst[0]+1, lst[1]+found_subjects[span.start]])
should_continue = True
break
if should_continue:
continue
objects_arr.append((span.normal, [1, found_subjects[span.start]]))
objects_arr_ners.append(span.normal)
max_weight = 0
opt_ent = 0
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight < w:
max_weight = w
opt_ent = t
if not return_best:
return opt_ent, objects_arr_ners
bests = []
for obj in objects_arr:
t, lst = obj
w = entity_weight(lst)
if max_weight*threshold < w:
bests.append(t)
return opt_ent, bests
text = '''В офисах Сбера начали тестировать технологию помощи посетителям в экстренных ситуациях. «Зеленая кнопка» будет
в зонах круглосуточного обслуживания офисов банка в Воронеже, Санкт-Петербурге, Подольске, Пскове, Орле и Ярославле.
В них находятся стенды с сенсорными кнопками, обеспечивающие связь с операторами центра мониторинга службы безопасности
банка. Получив сигнал о помощи, оператор центра может подключиться к объекту по голосовой связи. С помощью камер
видеонаблюдения он оценит обстановку и при необходимости вызовет полицию или скорую помощь. «Зеленой кнопкой» можно
воспользоваться в нерабочее для отделения время, если возникла угроза жизни или здоровью. В остальных случаях помочь
клиентам готовы сотрудники отделения банка. «Одно из направлений нашей работы в области ESG и устойчивого развития
— это забота об обществе. И здоровье людей как высшая ценность является его основой. Поэтому задача банка в области
безопасности гораздо масштабнее, чем обеспечение только финансовой безопасности клиентов. Этот пилотный проект
приурочен к 180-летию Сбербанка: мы хотим, чтобы, приходя в банк, клиент чувствовал, что его жизнь и безопасность
— наша ценность», — отметил заместитель председателя правления Сбербанка Станислав Кузнецов.'''
doc = analyze_doc(text)
key_entity = determine_subject(find_subjects_in_syntax(doc), doc, [])[0]
text_for_model = key_sentences_str(key_entity, doc, add_X=True, x_str='X', return_all=False)
```
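The key-entity choice in `determine_subject` above reduces to the scoring rule in `entity_weight`: each organization scores `c * total_mentions + subject_mentions`, the top score wins, and entities whose score exceeds `threshold` times the maximum are kept as a shortlist. A standalone sketch of that rule on made-up counts:

```python
def entity_weight(counts, c=1):
    # counts = [total_mentions, mentions_as_subject]
    return c * counts[0] + counts[1]

def pick_key_entity(stats, threshold=0.75):
    # stats maps entity name -> [total_mentions, mentions_as_subject]
    weights = {name: entity_weight(lst) for name, lst in stats.items()}
    best = max(weights, key=weights.get)
    shortlist = [name for name, w in weights.items() if w > weights[best] * threshold]
    return best, shortlist

# Made-up mention counts, for illustration only:
stats = {"Сбербанк": [5, 3], "Газпром": [4, 1], "Финам": [2, 0]}
print(pick_key_entity(stats))  # Сбербанк wins with weight 8 (vs 5 and 2)
```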
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 176 | 0.9504 | 0.6903 | 0.2215 |
| No log | 2.0 | 352 | 0.9065 | 0.7159 | 0.4760 |
| 0.8448 | 3.0 | 528 | 0.9687 | 0.7045 | 0.4774 |
| 0.8448 | 4.0 | 704 | 1.2436 | 0.7045 | 0.4686 |
| 0.8448 | 5.0 | 880 | 1.4809 | 0.7273 | 0.4630 |
| 0.2074 | 6.0 | 1056 | 1.5866 | 0.7330 | 0.5185 |
| 0.2074 | 7.0 | 1232 | 1.7056 | 0.7301 | 0.5210 |
| 0.2074 | 8.0 | 1408 | 1.6982 | 0.7415 | 0.5056 |
| 0.0514 | 9.0 | 1584 | 1.8088 | 0.7273 | 0.5203 |
| 0.0514 | 10.0 | 1760 | 1.9250 | 0.7102 | 0.4879 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
jenspt/byt5_ft_all_clean_data | jenspt | 2021-12-03T13:32:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | from transformers import TrainingArguments

training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
#per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler (used to be 500)
weight_decay=0.01, # strength of weight decay
#learning_rate=0.1e-3, # default = 5e-5=0.5e-4
logging_dir='./logs', # directory for storing logs
logging_steps=50,
#eval_steps = 100,
overwrite_output_dir = True,
save_strategy = 'epoch',
#logging_strategy = 'epoch',
) |
jenspt/byt5_ft_all_clean_data_ws3000 | jenspt | 2021-12-03T13:32:32Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | from transformers import TrainingArguments

training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
#per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=3000, # number of warmup steps for learning rate scheduler (used to be 500)
weight_decay=0.01, # strength of weight decay
#learning_rate=0.1e-3, # default = 5e-5=0.5e-4
logging_dir='./logs', # directory for storing logs
logging_steps=50,
#eval_steps = 100,
overwrite_output_dir = True,
save_strategy = 'epoch',
#logging_strategy = 'epoch',
) |
admin-63/eToro | admin-63 | 2021-12-03T13:23:09Z | 0 | 1 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | ♕〖𖡦الس௸اهر𖡦〗♕ |
danhsf/t5-small-finetuned-en-to-ro-lr_2e-3-fp_false | danhsf | 2021-12-03T09:19:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.1921
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr_2e-3-fp_false
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4239
- Bleu: 7.1921
- Gen Len: 18.2611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.8922 | 0.05 | 2000 | 1.7000 | 6.5274 | 18.2656 |
| 0.8621 | 0.1 | 4000 | 1.6409 | 6.6411 | 18.2311 |
| 0.8433 | 0.16 | 6000 | 1.6396 | 6.6601 | 18.2596 |
| 0.8297 | 0.21 | 8000 | 1.6304 | 6.7129 | 18.2581 |
| 0.8006 | 0.26 | 10000 | 1.6022 | 6.6067 | 18.2816 |
| 0.793 | 0.31 | 12000 | 1.5999 | 6.551 | 18.2631 |
| 0.774 | 0.37 | 14000 | 1.5586 | 6.7105 | 18.2661 |
| 0.7618 | 0.42 | 16000 | 1.5769 | 6.7278 | 18.2526 |
| 0.7463 | 0.47 | 18000 | 1.5625 | 6.6972 | 18.2201 |
| 0.7394 | 0.52 | 20000 | 1.5377 | 6.936 | 18.2491 |
| 0.7203 | 0.58 | 22000 | 1.5191 | 7.0205 | 18.2731 |
| 0.7158 | 0.63 | 24000 | 1.5055 | 6.835 | 18.2506 |
| 0.688 | 0.68 | 26000 | 1.4779 | 7.0534 | 18.2716 |
| 0.678 | 0.73 | 28000 | 1.4691 | 6.9735 | 18.2616 |
| 0.6677 | 0.79 | 30000 | 1.4702 | 7.0359 | 18.2496 |
| 0.6568 | 0.84 | 32000 | 1.4534 | 6.9982 | 18.2556 |
| 0.6475 | 0.89 | 34000 | 1.4427 | 7.0443 | 18.2466 |
| 0.6395 | 0.94 | 36000 | 1.4265 | 7.1205 | 18.2721 |
| 0.6319 | 1.0 | 38000 | 1.4239 | 7.1921 | 18.2611 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nateraw/resnet50-oxford-iiit-pet | nateraw | 2021-12-03T06:59:13Z | 82 | 0 | timm | [
"timm",
"pytorch",
"image-classification",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for resnet50-oxford-iiit-pet
 |
Doohae/roberta | Doohae | 2021-12-03T05:29:34Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | Model for extraction-based MRC (machine reading comprehension).
Original model: klue/roberta-large
Designed for an ODQA (open-domain question answering) competition. |
eliotm/t5-small-finetuned-en-to-ro-lr0.001 | eliotm | 2021-12-03T01:45:16Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-lr0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 5.8837
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-lr0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8309
- Bleu: 5.8837
- Gen Len: 18.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.9442 | 1.0 | 7629 | 1.8309 | 5.8837 | 18.2656 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
OscarNav/dialoGPT_translate | OscarNav | 2021-12-03T01:30:17Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | # Finetuned DialoGPT model for Eng-Spa translation
The DialoGPT-small model was fine-tuned on English-to-Spanish translations extracted from http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip
Some examples of translations:
| Role | Response |
| :---: |------------------------|
| User | please, sing me a song |
| Bot | Por favor, canta una canción. |
| User | I really want to go to China |
| Bot | Realmente quiero ir a China. |
| User | Can you do me a favor? |
| Bot | ¿Me puedes hacer un favor? |
| User | I don't know what you are talking about |
| Bot | No sé de qué estás hablando. |
| User | I don't want to go to China |
| Bot | No quiero ir a China. |
# Using the model
Example code for trying out the model:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('OscarNav/dialoGPT_translate')
# Let's translate 5 sentences
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# generate a response while limiting the total length to 1000 tokens
chat_history_ids = model.generate(
new_user_input_ids, max_length=1000,
pad_token_id=tokenizer.eos_token_id,
top_p=0.92, top_k = 50
)
# pretty print the last output tokens from the bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, new_user_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
aretw0/t5-small-finetuned-en-to-ro-dataset_20 | aretw0 | 2021-12-03T00:48:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-dataset_20
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3293
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
- Bleu: 7.3293
- Gen Len: 18.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6029 | 1.0 | 7629 | 1.4052 | 7.3293 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned32-en-to-de | alexrfelicio | 2021-12-02T22:39:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned32-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned32-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.4226 | 21.9554 | 17.8089 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned300-en-to-de | alexrfelicio | 2021-12-02T22:08:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned300-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned300-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.1454 | 14.2319 | 17.8329 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
alexrfelicio/t5-small-finetuned128-en-to-de | alexrfelicio | 2021-12-02T21:27:03Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: t5-small-finetuned128-en-to-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned128-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
fse/paragram-300-ws353 | fse | 2021-12-02T21:08:07Z | 0 | 0 | null | [
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- glove
- gensim
- fse
---
# Paragram Embeddings
300-dimensional Paragram embeddings tuned on the WordSim-353 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
|
fse/paragram-300-sl999 | fse | 2021-12-02T21:03:05Z | 0 | 0 | null | [
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- glove
- gensim
- fse
---
# Paragram Embeddings
300-dimensional Paragram embeddings tuned on the SimLex-999 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
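Embeddings like these are typically scored by the Spearman rank correlation between model cosine similarities and human similarity ratings. A self-contained toy sketch of that correlation (the scores below are made up for illustration, not real SimLex-999 data):

```python
def rank(values):
    # Rank position of each value (no tie handling, kept simple for clarity).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    # Pearson correlation of the ranks; this simplified form is valid when
    # there are no ties, because both rank lists then have equal variance.
    rx, ry = rank(x), rank(y)
    mean = (len(x) - 1) / 2
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = sum((a - mean) ** 2 for a in rx)
    return num / den

human = [9.2, 7.5, 3.1, 0.4]      # toy human similarity ratings
model = [0.81, 0.66, 0.40, 0.02]  # toy model cosine similarities
print(spearman(human, model))  # 1.0, the two rankings agree perfectly
```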
|
gayanin/bart-mlm-pubmed-medterm | gayanin | 2021-12-02T20:51:43Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-medterm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-medterm
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge2 Precision: 0.985
- Rouge2 Recall: 0.7208
- Rouge2 Fmeasure: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0018 | 1.0 | 13833 | 0.0003 | 0.985 | 0.7208 | 0.8088 |
| 0.0014 | 2.0 | 27666 | 0.0006 | 0.9848 | 0.7207 | 0.8086 |
| 0.0009 | 3.0 | 41499 | 0.0002 | 0.9848 | 0.7207 | 0.8086 |
| 0.0007 | 4.0 | 55332 | 0.0002 | 0.985 | 0.7208 | 0.8088 |
| 0.0006 | 5.0 | 69165 | 0.0001 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 6.0 | 82998 | 0.0002 | 0.9846 | 0.7206 | 0.8086 |
| 0.0009 | 7.0 | 96831 | 0.0001 | 0.9848 | 0.7208 | 0.8087 |
| 0.0 | 8.0 | 110664 | 0.0000 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 9.0 | 124497 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
| 0.0 | 10.0 | 138330 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/angiejolielive | huggingtweets | 2021-12-02T20:17:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/angiejolielive/1638476268574/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/817164380081180673/TJnt3Lxe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Angelina Jolie</div>
<div style="text-align: center; font-size: 14px;">@angiejolielive</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Angelina Jolie.
| Data | Angelina Jolie |
| --- | --- |
| Tweets downloaded | 1118 |
| Retweets | 71 |
| Short tweets | 45 |
| Tweets kept | 1002 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fb12gam/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angiejolielive's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g9ynpkt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g9ynpkt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/angiejolielive')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fse/fasttext-wiki-news-subwords-300 | fse | 2021-12-02T20:13:10Z | 0 | 2 | null | [
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- glove
- gensim
- fse
---
# fastText
1 million word vectors trained on Wikipedia 2017, the UMBC webbase corpus, and the statmt.org news dataset (16B tokens).
Read more:
* https://fasttext.cc/docs/en/english-vectors.html
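The "subwords" in the model name refers to fastText's character n-gram decomposition of each word. A rough, self-contained sketch of that idea (the real fastText implementation additionally hashes the n-grams into a fixed number of buckets):

```python
def char_ngrams(word, n_min=3, n_max=6):
    # Boundary markers "<" and ">" distinguish prefixes and suffixes, as in fastText.
    marked = "<" + word + ">"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(marked[i:i + n] for i in range(len(marked) - n + 1))
    return grams

print(char_ngrams("cat", n_max=4))
# ['<ca', 'cat', 'at>', '<cat', 'cat>']
```

A word's vector is then the sum of the vectors of its n-grams, which is what lets fastText produce embeddings for out-of-vocabulary words.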
|
kuppuluri/telugu_bertu_ner | kuppuluri | 2021-12-02T18:15:04Z | 26 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | # Named Entity Recognition Model for Telugu
#### How to use
Use the script below from your Python terminal, as the web inference interface has a few encoding issues with Telugu.
PS: If you find my model useful, I would appreciate a note from you, as it would encourage me to keep improving it and to add new models.
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_ner',
labels=[
'B-PERSON', 'I-ORG', 'B-ORG', 'I-LOC', 'B-MISC',
'I-MISC', 'I-PERSON', 'B-LOC', 'O'
],
use_cuda=False,
args={"use_multiprocessing": False})
text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were:
- eval_loss = 0.0004407190410447974
- f1_score = 0.999519076627124
- precision = 0.9994389677005691
- recall = 0.9995991983967936
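As a quick consistency check, the reported f1_score is the harmonic mean of the precision and recall above:

```python
# Values reported above for the test set.
precision = 0.9994389677005691
recall = 0.9995991983967936

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 6))  # 0.999519
```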
|
rtoguchi/t5-small-finetuned-en-to-ro-weight_decay_0.001 | rtoguchi | 2021-12-02T17:46:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-weight_decay_0.001
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3524
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-weight_decay_0.001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4509
- Bleu: 7.3524
- Gen Len: 18.2581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6488 | 1.0 | 7629 | 1.4509 | 7.3524 | 18.2581 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tyoyo/t5-base-TEDxJP-11body-0context | tyoyo | 2021-12-02T17:37:36Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:te_dx_jp",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-11body-0context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-11body-0context
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8068
- Wer: 0.1976
- Mer: 0.1904
- Wil: 0.2816
- Wip: 0.7184
- Hits: 602335
- Substitutions: 75050
- Deletions: 39435
- Insertions: 27185
- Cer: 0.1625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:-------------:|:---------:|:----------:|:------:|
| 0.8909 | 1.0 | 746 | 0.7722 | 0.3120 | 0.2861 | 0.3989 | 0.6011 | 558138 | 99887 | 58795 | 64983 | 0.2652 |
| 0.6786 | 2.0 | 1492 | 0.7021 | 0.2226 | 0.2122 | 0.3069 | 0.6931 | 592242 | 78773 | 45805 | 34978 | 0.1862 |
| 0.5627 | 3.0 | 2238 | 0.6996 | 0.2104 | 0.2016 | 0.2942 | 0.7058 | 597381 | 76593 | 42846 | 31392 | 0.1752 |
| 0.489 | 4.0 | 2984 | 0.7161 | 0.2030 | 0.1952 | 0.2865 | 0.7135 | 599808 | 75155 | 41857 | 28506 | 0.1684 |
| 0.4355 | 5.0 | 3730 | 0.7389 | 0.2000 | 0.1924 | 0.2837 | 0.7163 | 601815 | 75247 | 39758 | 28335 | 0.1651 |
| 0.3836 | 6.0 | 4476 | 0.7537 | 0.1992 | 0.1918 | 0.2829 | 0.7171 | 601846 | 75046 | 39928 | 27815 | 0.1640 |
| 0.3617 | 7.0 | 5222 | 0.7743 | 0.1995 | 0.1918 | 0.2832 | 0.7168 | 602287 | 75268 | 39265 | 28445 | 0.1642 |
| 0.3258 | 8.0 | 5968 | 0.7907 | 0.1971 | 0.1899 | 0.2809 | 0.7191 | 602800 | 74887 | 39133 | 27258 | 0.1620 |
| 0.3225 | 9.0 | 6714 | 0.8035 | 0.1981 | 0.1908 | 0.2823 | 0.7177 | 602418 | 75372 | 39030 | 27625 | 0.1630 |
| 0.3162 | 10.0 | 7460 | 0.8068 | 0.1976 | 0.1904 | 0.2816 | 0.7184 | 602335 | 75050 | 39435 | 27185 | 0.1625 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
fse/glove-wiki-gigaword-50 | fse | 2021-12-02T16:45:04Z | 0 | 1 | null | [
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- glove
- gensim
- fse
---
# GloVe Wikipedia + Gigaword
Pre-trained GloVe vectors based on Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased).
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
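These are plain word vectors, so a typical downstream use is ranking words by cosine similarity. A minimal, self-contained sketch of that computation (toy 3-dimensional vectors stand in here for the real 50-dimensional GloVe rows):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two word vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real GloVe rows.
king = [0.8, 0.65, 0.1]
queen = [0.75, 0.7, 0.15]
apple = [0.1, 0.2, 0.9]

print(cosine_similarity(king, queen) > cosine_similarity(king, apple))  # True
```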
|
fse/glove-wiki-gigaword-300 | fse | 2021-12-02T16:44:23Z | 0 | 5 | null | [
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- glove
- gensim
- fse
---
# GloVe Wikipedia + Gigaword
Pre-trained GloVe vectors based on Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased).
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-wiki-gigaword-100 | fse | 2021-12-02T16:42:45Z | 0 | 1 | null | [
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- glove
- gensim
- fse
---
# GloVe Wikipedia + Gigaword
Pre-trained GloVe vectors based on Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased).
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
fse/glove-twitter-200 | fse | 2021-12-02T16:40:17Z | 0 | 1 | null | [
"glove",
"gensim",
"fse",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- glove
- gensim
- fse
---
# Glove Twitter
Pre-trained glove vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased.
Read more:
* https://nlp.stanford.edu/projects/glove/
* https://nlp.stanford.edu/pubs/glove.pdf
|
huggingtweets/jayalammar | huggingtweets | 2021-12-02T15:51:33Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/jayalammar/1638460288971/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1325460517922729984/xDO9dBt-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jay Alammar</div>
<div style="text-align: center; font-size: 14px;">@jayalammar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jay Alammar.
| Data | Jay Alammar |
| --- | --- |
| Tweets downloaded | 692 |
| Retweets | 198 |
| Short tweets | 35 |
| Tweets kept | 459 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wf3zug3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jayalammar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hq8g8xlh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hq8g8xlh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jayalammar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
chandank/bart-base-finetuned-kaggglenews-batch8-epochs3 | chandank | 2021-12-02T15:10:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-epochs3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-epochs3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5635
- Rouge1: 28.2335
- Rouge2: 16.0201
- Rougel: 24.0315
- Rougelsum: 25.647
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
| 1.5345 | 2.0 | 990 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
| 1.531 | 3.0 | 1485 | 1.5635 | 28.2335 | 16.0201 | 24.0315 | 25.647 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
emrecan/convbert-base-turkish-mc4-cased-allnli_tr | emrecan | 2021-12-02T14:57:01Z | 97 | 2 | transformers | [
"transformers",
"pytorch",
"convbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-03-02T23:29:05Z | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convbert-base-turkish-mc4-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/convbert-base-turkish-mc4-cased](https://huggingface.co/dbmdz/convbert-base-turkish-mc4-cased) on the [nli_tr](https://huggingface.co/datasets/nli_tr) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5541
- Accuracy: 0.8111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7338 | 0.03 | 1000 | 0.6722 | 0.7236 |
| 0.603 | 0.07 | 2000 | 0.6465 | 0.7399 |
| 0.5605 | 0.1 | 3000 | 0.5801 | 0.7728 |
| 0.55 | 0.14 | 4000 | 0.5994 | 0.7626 |
| 0.529 | 0.17 | 5000 | 0.5720 | 0.7697 |
| 0.5196 | 0.2 | 6000 | 0.5692 | 0.7769 |
| 0.5117 | 0.24 | 7000 | 0.5725 | 0.7785 |
| 0.5044 | 0.27 | 8000 | 0.5532 | 0.7787 |
| 0.5016 | 0.31 | 9000 | 0.5546 | 0.7812 |
| 0.5031 | 0.34 | 10000 | 0.5461 | 0.7870 |
| 0.4949 | 0.37 | 11000 | 0.5725 | 0.7826 |
| 0.4894 | 0.41 | 12000 | 0.5419 | 0.7933 |
| 0.4796 | 0.44 | 13000 | 0.5278 | 0.7914 |
| 0.4795 | 0.48 | 14000 | 0.5193 | 0.7953 |
| 0.4713 | 0.51 | 15000 | 0.5534 | 0.7771 |
| 0.4738 | 0.54 | 16000 | 0.5098 | 0.8039 |
| 0.481 | 0.58 | 17000 | 0.5244 | 0.7958 |
| 0.4634 | 0.61 | 18000 | 0.5215 | 0.7972 |
| 0.465 | 0.65 | 19000 | 0.5129 | 0.7985 |
| 0.4624 | 0.68 | 20000 | 0.5062 | 0.8047 |
| 0.4597 | 0.71 | 21000 | 0.5114 | 0.8029 |
| 0.4571 | 0.75 | 22000 | 0.5070 | 0.8073 |
| 0.4602 | 0.78 | 23000 | 0.5115 | 0.7993 |
| 0.4552 | 0.82 | 24000 | 0.5085 | 0.8052 |
| 0.4538 | 0.85 | 25000 | 0.5118 | 0.7974 |
| 0.4517 | 0.88 | 26000 | 0.5036 | 0.8044 |
| 0.4517 | 0.92 | 27000 | 0.4930 | 0.8062 |
| 0.4413 | 0.95 | 28000 | 0.5307 | 0.7964 |
| 0.4483 | 0.99 | 29000 | 0.5195 | 0.7938 |
| 0.4036 | 1.02 | 30000 | 0.5238 | 0.8029 |
| 0.3724 | 1.05 | 31000 | 0.5125 | 0.8082 |
| 0.3777 | 1.09 | 32000 | 0.5099 | 0.8075 |
| 0.3753 | 1.12 | 33000 | 0.5172 | 0.8053 |
| 0.367 | 1.15 | 34000 | 0.5188 | 0.8053 |
| 0.3819 | 1.19 | 35000 | 0.5218 | 0.8046 |
| 0.363 | 1.22 | 36000 | 0.5202 | 0.7993 |
| 0.3794 | 1.26 | 37000 | 0.5240 | 0.8048 |
| 0.3749 | 1.29 | 38000 | 0.5026 | 0.8054 |
| 0.367 | 1.32 | 39000 | 0.5198 | 0.8075 |
| 0.3759 | 1.36 | 40000 | 0.5298 | 0.7993 |
| 0.3701 | 1.39 | 41000 | 0.5072 | 0.8091 |
| 0.3742 | 1.43 | 42000 | 0.5071 | 0.8098 |
| 0.3706 | 1.46 | 43000 | 0.5317 | 0.8037 |
| 0.3716 | 1.49 | 44000 | 0.5034 | 0.8052 |
| 0.3717 | 1.53 | 45000 | 0.5258 | 0.8012 |
| 0.3714 | 1.56 | 46000 | 0.5195 | 0.8050 |
| 0.3781 | 1.6 | 47000 | 0.5004 | 0.8104 |
| 0.3725 | 1.63 | 48000 | 0.5124 | 0.8113 |
| 0.3624 | 1.66 | 49000 | 0.5040 | 0.8094 |
| 0.3657 | 1.7 | 50000 | 0.4979 | 0.8111 |
| 0.3669 | 1.73 | 51000 | 0.4968 | 0.8100 |
| 0.3636 | 1.77 | 52000 | 0.5075 | 0.8079 |
| 0.36 | 1.8 | 53000 | 0.4985 | 0.8110 |
| 0.3624 | 1.83 | 54000 | 0.5125 | 0.8070 |
| 0.366 | 1.87 | 55000 | 0.4918 | 0.8117 |
| 0.3655 | 1.9 | 56000 | 0.5051 | 0.8109 |
| 0.3609 | 1.94 | 57000 | 0.5083 | 0.8105 |
| 0.3672 | 1.97 | 58000 | 0.5129 | 0.8085 |
| 0.3545 | 2.0 | 59000 | 0.5467 | 0.8109 |
| 0.2938 | 2.04 | 60000 | 0.5635 | 0.8049 |
| 0.29 | 2.07 | 61000 | 0.5781 | 0.8041 |
| 0.2992 | 2.11 | 62000 | 0.5470 | 0.8077 |
| 0.2957 | 2.14 | 63000 | 0.5765 | 0.8073 |
| 0.292 | 2.17 | 64000 | 0.5472 | 0.8106 |
| 0.2893 | 2.21 | 65000 | 0.5590 | 0.8085 |
| 0.2883 | 2.24 | 66000 | 0.5535 | 0.8064 |
| 0.2923 | 2.28 | 67000 | 0.5508 | 0.8095 |
| 0.2868 | 2.31 | 68000 | 0.5679 | 0.8098 |
| 0.2892 | 2.34 | 69000 | 0.5660 | 0.8057 |
| 0.292 | 2.38 | 70000 | 0.5494 | 0.8088 |
| 0.286 | 2.41 | 71000 | 0.5653 | 0.8085 |
| 0.2939 | 2.45 | 72000 | 0.5673 | 0.8070 |
| 0.286 | 2.48 | 73000 | 0.5600 | 0.8092 |
| 0.2844 | 2.51 | 74000 | 0.5508 | 0.8095 |
| 0.2913 | 2.55 | 75000 | 0.5645 | 0.8088 |
| 0.2859 | 2.58 | 76000 | 0.5677 | 0.8095 |
| 0.2892 | 2.62 | 77000 | 0.5598 | 0.8113 |
| 0.2898 | 2.65 | 78000 | 0.5618 | 0.8096 |
| 0.2814 | 2.68 | 79000 | 0.5664 | 0.8103 |
| 0.2917 | 2.72 | 80000 | 0.5484 | 0.8122 |
| 0.2907 | 2.75 | 81000 | 0.5522 | 0.8116 |
| 0.2896 | 2.79 | 82000 | 0.5540 | 0.8093 |
| 0.2907 | 2.82 | 83000 | 0.5469 | 0.8104 |
| 0.2882 | 2.85 | 84000 | 0.5471 | 0.8122 |
| 0.2878 | 2.89 | 85000 | 0.5532 | 0.8108 |
| 0.2858 | 2.92 | 86000 | 0.5511 | 0.8115 |
| 0.288 | 2.96 | 87000 | 0.5491 | 0.8111 |
| 0.2834 | 2.99 | 88000 | 0.5541 | 0.8111 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8-epochs10 | chandank | 2021-12-02T12:42:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews-batch8-epochs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8-epochs10
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5763
- Rouge1: 28.693
- Rouge2: 16.666
- Rougel: 24.2361
- Rougelsum: 26.0289
- Gen Len: 20.0
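The ROUGE-1 figure above is a unigram-overlap F-score between generated and reference summaries. A toy sketch of that computation (illustrative only — the reported metrics come from the full ROUGE implementation, which also applies tokenization and stemming not shown here):

```python
# Toy ROUGE-1 F1: unigram overlap between a candidate summary and a reference.
# Illustrative only; the card's scores were produced by the real ROUGE metric.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat on the mat"), 3))  # 0.667
```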
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6043 | 27.8611 | 15.8713 | 23.8365 | 25.378 | 20.0 |
| 1.9054 | 2.0 | 990 | 1.5613 | 28.2715 | 16.3724 | 24.3212 | 25.8499 | 20.0 |
| 1.651 | 3.0 | 1485 | 1.5394 | 28.6282 | 16.2976 | 24.2336 | 25.9434 | 20.0 |
| 1.4955 | 4.0 | 1980 | 1.5438 | 28.9266 | 16.7257 | 24.61 | 26.443 | 20.0 |
| 1.4034 | 5.0 | 2475 | 1.5449 | 28.2296 | 16.1292 | 23.9698 | 25.651 | 20.0 |
| 1.3077 | 6.0 | 2970 | 1.5642 | 28.4486 | 16.3833 | 24.1629 | 26.0013 | 20.0 |
| 1.2505 | 7.0 | 3465 | 1.5566 | 28.5469 | 16.5374 | 24.2966 | 25.962 | 20.0 |
| 1.2027 | 8.0 | 3960 | 1.5730 | 28.7278 | 16.6442 | 24.2531 | 26.1171 | 20.0 |
| 1.1571 | 9.0 | 4455 | 1.5690 | 28.7736 | 16.7491 | 24.3066 | 26.1439 | 20.0 |
| 1.1237 | 10.0 | 4950 | 1.5763 | 28.693 | 16.666 | 24.2361 | 26.0289 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
chandank/bart-base-finetuned-kaggglenews-batch8 | chandank | 2021-12-02T09:16:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-kaggglenews-batch8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6409 | 27.9647 | 15.4352 | 23.611 | 25.107 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL | Jeska | 2021-12-02T08:29:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
This model is a fine-tuned version of [Jeska/BertjeWDialDataQA20k](https://huggingface.co/Jeska/BertjeWDialDataQA20k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Accuracy: 0.6322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4418 | 1.0 | 1457 | 2.3866 | 0.5406 |
| 1.7742 | 2.0 | 2914 | 1.9365 | 0.6069 |
| 1.1313 | 3.0 | 4371 | 1.8355 | 0.6322 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
LzLzLz/Bert | LzLzLz | 2021-12-02T06:50:05Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:04Z | It's a sentiment inference model based on BERT. |
Akari/albert-base-v2-finetuned-squad | Akari | 2021-12-02T05:36:13Z | 51 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9492
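Extractive QA models like this one predict start and end logits over the context tokens and decode the highest-scoring valid span. A minimal sketch of that decoding step (tokens and logit values below are invented for illustration, not outputs of this model):

```python
# Decode an answer span from start/end logits, as extractive QA heads do.
# Tokens and logits are invented for illustration.
tokens = ["ALBERT", "was", "released", "in", "2019", "."]
start_logits = [0.1, -0.3, 0.2, -0.5, 2.4, -1.0]
end_logits = [-0.2, -0.4, 0.0, -0.6, 2.1, 0.3]

# Pick the (start, end) pair with the highest summed logit, with end >= start.
best = max(
    ((s, e) for s in range(len(tokens)) for e in range(s, len(tokens))),
    key=lambda se: start_logits[se[0]] + end_logits[se[1]],
)
print(" ".join(tokens[best[0]: best[1] + 1]))  # 2019
```

Real decoders additionally cap the span length and skip spans that fall outside the context, which this sketch omits.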
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 8248 | 0.8813 |
| 0.6333 | 2.0 | 16496 | 0.8042 |
| 0.4372 | 3.0 | 24744 | 0.9492 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
huggingtweets/kaikothesharko | huggingtweets | 2021-12-02T04:58:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kaikothesharko/1638421086822/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1463379249578987527/OUX9AGXt_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kaiko TF (RAFFLE IN PINNED)</div>
<div style="text-align: center; font-size: 14px;">@kaikothesharko</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Kaiko TF (RAFFLE IN PINNED).
| Data | Kaiko TF (RAFFLE IN PINNED) |
| --- | --- |
| Tweets downloaded | 2169 |
| Retweets | 259 |
| Short tweets | 529 |
| Tweets kept | 1381 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18zt3o3w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kaikothesharko's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ajrcjpz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ajrcjpz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kaikothesharko')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
chopey/testmntdv | chopey | 2021-12-02T02:48:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | A test English-Dhivehi / Dhivehi-English NMT model.
It would need a lot more data to produce accurate translations. |
huggingtweets/afm_marketing | huggingtweets | 2021-12-02T01:51:26Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1216156392/afm-marketing_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">AFM Marketing</div>
<div style="text-align: center; font-size: 14px;">@afm_marketing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from AFM Marketing.
| Data | AFM Marketing |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 1051 |
| Short tweets | 64 |
| Tweets kept | 2123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6tgdc3wa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afm_marketing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36mudapr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36mudapr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/afm_marketing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/FormalRobertaaa | BigSalmon | 2021-12-02T00:23:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | https://huggingface.co/spaces/BigSalmon/MASK2 |
BigSalmon/FormalBerta3 | BigSalmon | 2021-12-02T00:20:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | https://huggingface.co/spaces/BigSalmon/MASK2 |
BigSalmon/FormalRobertaa | BigSalmon | 2021-12-02T00:19:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | https://huggingface.co/spaces/BigSalmon/MASK2 |
emrecan/bert-base-multilingual-cased-multinli_tr | emrecan | 2021-12-01T19:45:01Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-03-02T23:29:05Z | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
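Zero-shot classification with an NLI model, as in the widget examples above, turns each candidate label into an entailment hypothesis and then softmaxes the per-label entailment logits. A minimal numeric sketch of that final scoring step (the logits below are invented for illustration, not real outputs of this model):

```python
import math

# Invented entailment logits, one per candidate label, for the premise
# "Dolar yükselmeye devam ediyor." -- NOT real outputs of this model.
entailment_logits = {"ekonomi": 3.1, "siyaset": 0.4, "spor": -1.2}

def softmax_scores(logits: dict) -> dict:
    """Normalize per-label entailment logits into zero-shot scores."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

scores = softmax_scores(entailment_logits)
print(max(scores, key=scores.get))  # ekonomi
```

In practice the `zero-shot-classification` pipeline handles the hypothesis templating and batching; this sketch only shows how the label scores relate to the logits.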
|
emrecan/convbert-base-turkish-mc4-cased-snli_tr | emrecan | 2021-12-01T19:43:30Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"convbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-03-02T23:29:05Z | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
hankzhong/electra-small-discriminator-finetuned-squad | hankzhong | 2021-12-01T19:04:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-small-discriminator-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-squad
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5751 | 1.0 | 2767 | 1.3952 |
| 1.2939 | 2.0 | 5534 | 1.2458 |
| 1.1866 | 3.0 | 8301 | 1.2174 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Narrativaai/deberta-v3-small-finetuned-hate_speech18 | Narrativaai | 2021-12-01T17:41:13Z | 9 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:hate_speech18",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- hate_speech18
widget:
- text: "ok, so do we need to kill them too or are the slavs okay ? for some reason whenever i hear the word slav , the word slobber comes to mind and i picture a slobbering half breed creature like the humpback of notre dame or Igor haha"
metrics:
- accuracy
model-index:
- name: deberta-v3-small-hate-speech
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: hate_speech18
type: hate_speech18
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.916058394160584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 small fine-tuned on hate_speech18 dataset for Hate Speech Detection
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the hate_speech18 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2922
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4147 | 1.0 | 650 | 0.3910 | 0.8832 |
| 0.2975 | 2.0 | 1300 | 0.2922 | 0.9161 |
| 0.2575 | 3.0 | 1950 | 0.3555 | 0.9051 |
| 0.1553 | 4.0 | 2600 | 0.4263 | 0.9124 |
| 0.1267 | 5.0 | 3250 | 0.4238 | 0.9161 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/binance | huggingtweets | 2021-12-01T14:02:42Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/binance/1638367358099/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1466001345324875784/4RrjsTR__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Binance</div>
<div style="text-align: center; font-size: 14px;">@binance</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Binance.
| Data | Binance |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 268 |
| Short tweets | 353 |
| Tweets kept | 2629 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/m31ml960/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @binance's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vx6m0ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vx6m0ip/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/binance')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
wandemberg-eld/opus-mt-en-de-finetuned-en-to-de | wandemberg-eld | 2021-12-01T12:49:07Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-de-finetuned-en-to-de
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: de-en
metrics:
- name: Bleu
type: bleu
value: 29.4312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4083
- Bleu: 29.4312
- Gen Len: 24.746
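The BLEU score above comes from corpus-level n-gram matching against reference translations. As a toy illustration, clipped unigram precision — one ingredient of BLEU; the real metric combines 1- to 4-gram precisions with a brevity penalty — can be sketched as:

```python
from collections import Counter

# Toy clipped unigram precision, one ingredient of BLEU. Illustrative only;
# the score reported in this card was computed by the full BLEU metric.
def clipped_unigram_precision(hypothesis: str, reference: str) -> float:
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    # Each hypothesis token counts at most as often as it appears in the reference.
    clipped = sum(min(count, ref[tok]) for tok, count in hyp.items())
    return clipped / max(1, sum(hyp.values()))

print(clipped_unigram_precision("the the the cat", "the cat sat"))  # 0.5
```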
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 1.978 | 1.0 | 568611 | 1.4083 | 29.4312 | 24.746 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-256 | rossanez | 2021-12-01T11:08:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.2663 | 4.5343 | 17.698 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-small-finetuned-de-en-64 | rossanez | 2021-12-01T11:02:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-small-finetuned-de-en-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.3808 | 3.1482 | 17.8019 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rossanez/t5-base-finetuned-de-en | rossanez | 2021-12-01T10:55:50Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt14",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
model-index:
- name: t5-base-finetuned-de-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-de-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.4324 | 1.2308 | 17.8904 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ying-tina/wav2vec2-base-timit-demo-colab-32 | ying-tina | 2021-12-01T10:54:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-32
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-32
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4488
- Wer: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6155 | 4.0 | 500 | 2.2647 | 0.9992 |
| 0.9037 | 8.0 | 1000 | 0.4701 | 0.4336 |
| 0.3159 | 12.0 | 1500 | 0.4247 | 0.3575 |
| 0.1877 | 16.0 | 2000 | 0.4477 | 0.3442 |
| 0.1368 | 20.0 | 2500 | 0.4932 | 0.3384 |
| 0.1062 | 24.0 | 3000 | 0.4758 | 0.3202 |
| 0.0928 | 28.0 | 3500 | 0.4488 | 0.3149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
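The schedule above (linear with 1000 warmup steps) can be sketched as a pure function. The training log shows 500 steps per 4 epochs, i.e. 125 steps per epoch, so 30 epochs give roughly 3750 total steps; that total is inferred from the table, not stated in the card.

```python
# Sketch of the linear learning-rate schedule with warmup described above:
# ramp from 0 to base_lr over `warmup` steps, then decay linearly to 0.

def linear_warmup_lr(step: int, base_lr: float = 1e-4,
                     warmup: int = 1000, total: int = 3750) -> float:
    """Learning rate at a given optimizer step (inferred total of 3750 steps)."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total - step) / (total - warmup))
```

At step 500 this gives half the base rate, peaking at step 1000 and reaching zero at the final step.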
|
emrecan/bert-base-turkish-cased-multinli_tr | emrecan | 2021-12-01T10:45:51Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-03-02T23:29:05Z | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
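Zero-shot classification with an NLI model works by pairing the input text (as premise) with one hypothesis per candidate label and scoring entailment. The sketch below shows that pairing step only; the Turkish hypothesis template is an illustrative assumption, not taken from the card.

```python
# Sketch of how NLI-based zero-shot classification frames each candidate label
# as a hypothesis (the template string here is an assumption for illustration).

def build_nli_pairs(text, candidate_labels, template="Bu örnek {} ile ilgilidir."):
    """Return (premise, hypothesis) pairs, one per candidate label."""
    return [(text, template.format(label)) for label in candidate_labels]

pairs = build_nli_pairs("Dolar yükselmeye devam ediyor.",
                        ["ekonomi", "siyaset", "spor"])
# Each pair is scored for entailment by the NLI model; the label whose
# hypothesis receives the highest entailment probability is predicted.
```

In practice this is what the `zero-shot-classification` pipeline does internally when given `candidate_labels` like those in the widget above.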
|