modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable ⌀) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable ⌀) | likes (float64, 0-712, nullable ⌀) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Amloii/gpt2-reviewspanish | 81b25b23c54080f38a2cc51b417b9b10332e4440 | 2022-05-19T08:28:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"es",
"dataset:amazon_reviews_multi",
"transformers",
"GPT-2",
"Spanish",
"review",
"fake",
"license:mit"
] | text-generation | false | Amloii | null | Amloii/gpt2-reviewspanish | 2 | 0 | transformers | 25,700 | ---
language: es
tags:
- GPT-2
- Spanish
- review
- fake
datasets:
- amazon_reviews_multi
widget:
- text: "Me ha gustado su"
example_title: "Positive review"
- text: "No quiero"
example_title: "Negative review"
license: mit
---
# GPT-2 - reviewspanish
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of text data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
In our case, we created a fine-tuned version of [Spanish GPT-2](https://huggingface.co/DeepESP/gpt2-spanish) trained on
the Spanish reviews of Amazon products from the HF dataset [Amazon-reviews-multi](https://huggingface.co/datasets/amazon_reviews_multi).
With this strategy, we obtain a text-generation model able to create realistic product reviews, which is useful for detecting bot-generated
fake reviews.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation',
                     model='Amloii/gpt2-reviewspanish',
                     tokenizer='Amloii/gpt2-reviewspanish')
set_seed(42)
generator("Me ha gustado su", max_length=30, num_return_sequences=5)
[{'generated_text': 'Me ha gustado su tamaño y la flexibilidad de las correas, al ser de plastico las hebillas que lleva para sujetar las cadenas me han quitado el'},
{'generated_text': 'Me ha gustado su color y calidad. Lo peor de todo, es que las gafas no se pegan nada. La parte de fuera es finita'},
{'generated_text': 'Me ha gustado su rapidez y los ajustes de la correa, lo único que para mí, es poco manejable. Además en el bolso tiene una goma'},
{'generated_text': 'Me ha gustado su diseño y las dimensiones, pero el material es demasiado duro. Se nota bastante el uso pero me parece un poco caro para lo que'},
{'generated_text': 'Me ha gustado su aspecto aunque para lo que yo lo quería no me ha impresionado mucho. Las hojas tienen un tacto muy agradable que hace que puedas'}]
```
|
manueltonneau/bert-twitter-es-is-unemployed | 01b3d65fcfff52cabad2230ee973b50c3d546a2d | 2022-04-26T16:02:53.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-es-is-unemployed | 2 | null | transformers | 25,701 | ---
language: es # <-- my language
widget:
- text: "No tengo trabajo"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Is Unemployed (1), else (0)
- country: MX
- language: Spanish
- architecture: BERT base
## Model description
This model is a version of `dccuchile/bert-base-spanish-wwm-cased` finetuned to recognize Spanish tweets where a user mentions that she is unemployed. It was trained on Spanish tweets from users based in Mexico. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user is currently unemployed (label=1)
- the negative class referring to all other tweets (label=0)
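The card itself does not include a usage snippet. A minimal sketch, assuming the checkpoint works with the standard `text-classification` pipeline (the returned label names depend on the checkpoint's config, so the positive class may surface as `LABEL_1`):
```python
from transformers import pipeline

# Minimal sketch: binary classification of Spanish tweets (1 = user discloses being unemployed)
classifier = pipeline(
    "text-classification",
    model="manueltonneau/bert-twitter-es-is-unemployed",
)
print(classifier("No tengo trabajo"))  # widget example from this card
```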
## Resources
The dataset of Spanish tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
0x12/t5small-news_commentary-en-zh | c9936638213fbaed74fe497ca5c860319c0677bb | 2022-04-26T19:23:08.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | 0x12 | null | 0x12/t5small-news_commentary-en-zh | 2 | null | transformers | 25,702 | Entry not found |
manueltonneau/bert-twitter-es-job-search | c8d078857fa1995dc2ff361aca12815d805ad6e8 | 2022-04-26T20:12:47.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-es-job-search | 2 | null | transformers | 25,703 | ---
language: es # <-- my language
widget:
- text: "Busco trabajo"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Search (1), else (0)
- country: MX
- language: Spanish
- architecture: BERT base
## Model description
This model is a version of `dccuchile/bert-base-spanish-wwm-cased` finetuned to recognize Spanish tweets where a user mentions that she is currently looking for a job. It was trained on Spanish tweets from users based in Mexico. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user is currently looking for a job (label=1)
- the negative class referring to all other tweets (label=0)
## Resources
The dataset of Spanish tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
rahulgkatre/DialoGPT-lisa | 896d058c3e1c922c53f3a62bd1a9f18013810c31 | 2022-04-27T04:06:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rahulgkatre | null | rahulgkatre/DialoGPT-lisa | 2 | null | transformers | 25,704 | Entry not found |
mriggs/gutenberg_wikisource_on_flaubert | 534c83552a7c8906aa0edf2e6758d83f7e0e48cd | 2022-04-27T05:26:16.000Z | [
"pytorch",
"flaubert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mriggs | null | mriggs/gutenberg_wikisource_on_flaubert | 2 | null | transformers | 25,705 | Entry not found |
manueltonneau/bert-twitter-pt-lost-job | 7d5476a2030c35bddb6e794454deab314ac4b98d | 2022-04-27T08:39:01.000Z | [
"pytorch",
"bert",
"text-classification",
"pt",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-pt-lost-job | 2 | null | transformers | 25,706 | ---
language: pt # <-- my language
widget:
- text: "hoje perdi o meu trabalho.."
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Lost Job (1), else (0)
- country: BR
- language: Portuguese
- architecture: BERT base
## Model description
This model is a version of `neuralmind/bert-base-portuguese-cased` finetuned to recognize Portuguese tweets where a user mentions that she lost her job in the past month. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user recently lost her job (label=1)
- the negative class referring to all other tweets (label=0)
## Resources
The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
Lilya/distilbert-base-uncased-finetuned-ner-final | 969f741721dd83b536eb1dec98ff682618e5d9bf | 2022-04-27T08:33:02.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Lilya | null | Lilya/distilbert-base-uncased-finetuned-ner-final | 2 | null | transformers | 25,707 | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-ner-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner-final
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
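For reference, a minimal sketch (an assumption, not part of the original card) of how these values map onto `transformers.TrainingArguments`; the Adam betas and epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; model and dataset setup are omitted.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner-final",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```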
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
Ghost1/marian-finetuned-kde4-en-to-fr3 | b5e0307c095926fe77a00a5a1cc3280777573a60 | 2022-04-27T11:09:24.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | Ghost1 | null | Ghost1/marian-finetuned-kde4-en-to-fr3 | 2 | null | transformers | 25,708 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 45.69063116587886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3274
- Bleu: 45.6906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kvnaraya/DialoGPT-small-dwight | 4bbc830651ce5ff98867b6366e9831490949c0bf | 2022-04-27T15:58:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | kvnaraya | null | kvnaraya/DialoGPT-small-dwight | 2 | null | transformers | 25,709 | Entry not found |
Diya-999/Bart12-12V6.0 | f1f2923415f951ebd829c05ad5198b11438077a9 | 2022-04-28T04:09:37.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | Diya-999 | null | Diya-999/Bart12-12V6.0 | 2 | null | transformers | 25,710 | ---
license: afl-3.0
---
|
nbroad/jplu-xlm-r-ner-40-lang | 7f7f0fe9bc946a9848611aff079f556387687216 | 2022-06-09T17:51:49.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | nbroad | null | nbroad/jplu-xlm-r-ner-40-lang | 2 | null | transformers | 25,711 | pytorch version of [jplu/tf-xlm-r-ner-40-lang](https://huggingface.co/jplu/tf-xlm-r-ner-40-lang)
|
PSW/random_sim_ins3_seed1 | 81deacb236b76a98c44aef2d2f327b99e67bc9f2 | 2022-04-27T15:39:05.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins3_seed1 | 2 | null | transformers | 25,712 | Entry not found |
PSW/random_sim_ins3_seed27 | af6591926a4dab40bd5625d460684c36df3473f1 | 2022-04-27T16:36:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_ins3_seed27 | 2 | null | transformers | 25,713 | Entry not found |
Bistolero/german_40k_final | 532ede45fdb882f7932169586e785e98a1c26706 | 2022-04-27T17:43:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/german_40k_final | 2 | null | transformers | 25,714 | Entry not found |
Bistolero/german_40k | 6f312d45c2aff9f5b60947f570eb47126c40fbf7 | 2022-04-27T18:35:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/german_40k | 2 | null | transformers | 25,715 | Entry not found |
PSW/random_sim_swap2_seed1 | 9d0f9f06965894e4b3fc6d78014a74a627db0449 | 2022-04-27T18:29:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/random_sim_swap2_seed1 | 2 | null | transformers | 25,716 | Entry not found |
bdickson/bert-base-uncased-finetuned-squad | e16ec28bf4e8550254f85fa1331a65be1f75eb3d | 2022-04-28T07:30:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | bdickson | null | bdickson/bert-base-uncased-finetuned-squad | 2 | null | transformers | 25,717 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1240
- eval_runtime: 262.7193
- eval_samples_per_second: 41.048
- eval_steps_per_second: 2.565
- epoch: 3.0
- step: 16599
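No usage example is provided in the card; a minimal sketch with the `question-answering` pipeline (an assumption, untested against this exact checkpoint):
```python
from transformers import pipeline

# Sketch only: extractive QA with the fine-tuned checkpoint
qa = pipeline("question-answering", model="bdickson/bert-base-uncased-finetuned-squad")
answer = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(answer)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```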
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
icity/distilbert-base-uncased-finetuned-imdb | b8abca7819ce5c56509d061a2904c9550c156e8e | 2022-05-18T15:29:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | icity | null | icity/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 25,718 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.414 | 1.0 | 10 | 4.7780 |
| 4.8623 | 2.0 | 20 | 4.7064 |
| 4.6726 | 3.0 | 30 | 4.5646 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Ghost1/mt5-small-finetuned-amazon-en-es | 32ec5d3da1d66e25cddee36aa2708b197ed57fcd | 2022-04-28T14:49:11.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | Ghost1 | null | Ghost1/mt5-small-finetuned-amazon-en-es | 2 | null | transformers | 25,719 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0282
- Rouge1: 17.629
- Rouge2: 8.5256
- Rougel: 17.1329
- Rougelsum: 17.1403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 6.6665 | 1.0 | 1209 | 3.2917 | 13.9446 | 5.4878 | 13.3696 | 13.3884 |
| 3.9091 | 2.0 | 2418 | 3.1575 | 16.5515 | 8.4045 | 15.734 | 15.8858 |
| 3.5987 | 3.0 | 3627 | 3.0803 | 18.4586 | 10.0134 | 17.6448 | 17.8592 |
| 3.4269 | 4.0 | 4836 | 3.0492 | 17.9493 | 8.9283 | 17.0803 | 17.1683 |
| 3.3213 | 5.0 | 6045 | 3.0466 | 18.124 | 8.967 | 17.4472 | 17.4445 |
| 3.2368 | 6.0 | 7254 | 3.0405 | 17.5527 | 8.4814 | 16.9722 | 17.0104 |
| 3.2039 | 7.0 | 8463 | 3.0335 | 17.5116 | 8.2969 | 17.006 | 17.0084 |
| 3.1834 | 8.0 | 9672 | 3.0282 | 17.629 | 8.5256 | 17.1329 | 17.1403 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
it5/it5-efficient-small-el32-repubblica-to-ilgiornale | 0ae8af833fa574596bd3fb2667b7e57b39138fea | 2022-04-29T14:46:50.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"arxiv:2109.10686",
"transformers",
"italian",
"sequence-to-sequence",
"efficient",
"newspaper",
"ilgiornale",
"repubblica",
"style-transfer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | it5 | null | it5/it5-efficient-small-el32-repubblica-to-ilgiornale | 2 | null | transformers | 25,720 | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- efficient
- newspaper
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: it5-efficient-small-el32-repubblica-to-ilgiornale
results:
- task:
type: headline-style-transfer-repubblica-to-ilgiornale
name: "Headline style transfer (Repubblica to Il Giornale)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.269
name: "Test Rouge1"
- type: rouge2
value: 0.087
name: "Test Rouge2"
- type: rougeL
value: 0.235
name: "Test RougeL"
- type: bertscore
value: 0.395
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: headline-headline-consistency-classifier
value: 0.808
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.810
name: "Test Headline-Article Consistency Accuracy"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for use in TensorFlow, PyTorch, and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
r2g = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-repubblica-to-ilgiornale')
r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
princeton-nlp/efficient_mlm_m0.20 | 05388b19cae3a6bad03ce7c81ff5a89bc27d5205 | 2022-04-28T18:57:30.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"transformers",
"autotrain_compatible"
] | fill-mask | false | princeton-nlp | null | princeton-nlp/efficient_mlm_m0.20 | 2 | null | transformers | 25,721 | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre-layer norm, which is not supported by Hugging Face Transformers. To use our model, go to our [GitHub repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example:
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
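# A follow-up sketch (assumption, not from the original card): with the pre-layer-norm
# classes importable, the released checkpoint should load through from_pretrained.
model = RobertaForMaskedLM.from_pretrained("princeton-nlp/efficient_mlm_m0.20")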
``` |
princeton-nlp/efficient_mlm_m0.70 | b906e1b03a7f8b92f0c2e84be2970ccf94ffeb49 | 2022-04-28T18:57:57.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"transformers",
"autotrain_compatible"
] | fill-mask | false | princeton-nlp | null | princeton-nlp/efficient_mlm_m0.70 | 2 | null | transformers | 25,722 | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre-layer norm, which is not supported by Hugging Face Transformers. To use our model, go to our [GitHub repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example:
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
123tarunanand/roberta-base-finetuned | c9747f1ecf8d9dd0520d31636974644a3cf082c5 | 2022-04-28T15:32:00.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | 123tarunanand | null | 123tarunanand/roberta-base-finetuned | 2 | null | transformers | 25,723 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.88 | 1.0 | 8160 | 0.8129 |
| 0.6643 | 2.0 | 16320 | 0.8567 |
| 0.5096 | 3.0 | 24480 | 0.9325 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
chv5/t5-small-shuffled_take3-small | 1bc5094258f5225846bbaf9e8ee288fb491db76c | 2022-04-29T03:26:41.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chv5 | null | chv5/t5-small-shuffled_take3-small | 2 | null | transformers | 25,724 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-shuffled_take3-small
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 11.883
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-shuffled_take3-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4505
- Rouge1: 11.883
- Rouge2: 9.4784
- Rougel: 10.9978
- Rougelsum: 11.5961
- Gen Len: 18.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 0.5205 | 1.0 | 34008 | 0.4505 | 11.883 | 9.4784 | 10.9978 | 11.5961 | 18.9834 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bkh6722/wav2vec2-vorarlbergerisch | 8fba45435ffdb31a62ab80379f186037d4756959 | 2022-04-29T02:50:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bkh6722 | null | bkh6722/wav2vec2-vorarlbergerisch | 2 | null | transformers | 25,725 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-vorarlbergerisch
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-vorarlbergerisch
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9241
- Wer: 0.4358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 62
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.6837 | 3.83 | 100 | 3.7188 | 1.0 |
| 3.33 | 7.68 | 200 | 3.0620 | 1.0 |
| 2.9508 | 11.53 | 300 | 2.5915 | 1.0101 |
| 1.8954 | 15.38 | 400 | 1.6930 | 0.8243 |
| 1.231 | 19.23 | 500 | 1.7179 | 0.7551 |
| 0.9862 | 23.08 | 600 | 1.5237 | 0.6529 |
| 0.7353 | 26.91 | 700 | 1.5119 | 0.5921 |
| 0.5368 | 30.75 | 800 | 1.5011 | 0.5574 |
| 0.4448 | 34.6 | 900 | 1.5334 | 0.5363 |
| 0.3278 | 38.45 | 1000 | 1.7125 | 0.5144 |
| 0.2575 | 42.3 | 1100 | 1.6529 | 0.4958 |
| 0.1966 | 46.15 | 1200 | 1.7670 | 0.4848 |
| 0.1552 | 49.98 | 1300 | 1.7586 | 0.4620 |
| 0.1118 | 53.83 | 1400 | 1.7912 | 0.4417 |
| 0.0847 | 57.68 | 1500 | 1.8709 | 0.4443 |
| 0.0654 | 61.53 | 1600 | 1.9241 | 0.4358 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
megrisdal/distilbert-rater | 61f67b85438fc7bdeaa399551f6ab6d61369adff | 2022-05-24T17:33:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | megrisdal | null | megrisdal/distilbert-rater | 2 | null | transformers | 25,726 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-rater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rater
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
AntoDono/DialoGPT-Bopy-Normal | b7c4205d520b7e78349ddb22b06efc6ff9fa9654 | 2022-04-29T02:34:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AntoDono | null | AntoDono/DialoGPT-Bopy-Normal | 2 | null | transformers | 25,727 | Entry not found |
mpangrazzi/wonderflow_newsletter | 9ce815fee371a48a859db3f44fc65d09b241be03 | 2022-05-02T12:36:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | mpangrazzi | null | mpangrazzi/wonderflow_newsletter | 2 | 1 | transformers | 25,728 | ---
license: mit
---
A fancy weekly newsletter generator for the Wonderflow Development team. NOTE: Use with caution.
To use this model, first load it:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("mpangrazzi/wonderflow_newsletter")
model = AutoModelForCausalLM.from_pretrained("mpangrazzi/wonderflow_newsletter")
```
Then, use a `pipeline` to get predictions:
```python
from transformers import pipeline
text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
inputs = ["This week the development team"]
samples = text_generator(
    inputs,
    do_sample=True,
    max_length=150,
    num_return_sequences=5,
    num_beams=5,
    top_p=0.90,
    temperature=1.3
)
outputs = [entry["generated_text"] for sample in samples for entry in sample]
for entry in outputs:
    print(f"{entry}\n\n")
```
|
megrisdal/distilbert-base-uncased-finetuned | 4f56c2ee308f5e1b9c9439d720c163990059e28c | 2022-04-30T03:28:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | megrisdal | null | megrisdal/distilbert-base-uncased-finetuned | 2 | null | transformers | 25,729 | Entry not found |
fjavitor/gpt-2-spanish-cantaubot_1.0 | f7bd51190572351c807525ed42930f3fbe08e1ca | 2022-05-03T16:45:01.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | fjavitor | null | fjavitor/gpt-2-spanish-cantaubot_1.0 | 2 | null | transformers | 25,730 | ---
widget:
- text: "Dale alegría a tu cuerpo, Macarena"
---
|
dipteshkanojia/roberta-large-finetuned-ner | 30b11631f5602ed1b0339f2067ffdd02bcc7ad3d | 2022-04-30T21:40:41.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | dipteshkanojia | null | dipteshkanojia/roberta-large-finetuned-ner | 2 | null | transformers | 25,731 | Entry not found |
Muennighoff/t5-small-finetuned-xsum | fe9a7803b6cbecae89850fa66ca1feae7f356d12 | 2022-04-30T14:26:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Muennighoff | null | Muennighoff/t5-small-finetuned-xsum | 2 | null | transformers | 25,732 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4784
- Rouge1: 28.2881
- Rouge2: 7.6834
- Rougel: 22.2163
- Rougelsum: 22.219
- Gen Len: 18.8292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7184 | 1.0 | 12753 | 2.4784 | 28.2881 | 7.6834 | 22.2163 | 22.219 | 18.8292 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Barkavi/totto-bert-score-pretrained-10K-steps | 4f7ba3869c40ca8ad1e331236c3519fa7a953394 | 2022-04-30T19:25:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Barkavi | null | Barkavi/totto-bert-score-pretrained-10K-steps | 2 | null | transformers | 25,733 | Entry not found |
sherry7144/wav2vec2-base-timit-demo-colab0 | bf7f0b3c8b5b96595ee9f80c2194633147974d22 | 2022-04-30T20:04:12.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sherry7144 | null | sherry7144/wav2vec2-base-timit-demo-colab0 | 2 | null | transformers | 25,734 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0395
- Wer: 0.5635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3976 | 13.89 | 500 | 0.8616 | 0.5968 |
| 0.2637 | 27.78 | 1000 | 0.9973 | 0.5826 |
| 0.1794 | 41.67 | 1500 | 1.0395 | 0.5635 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
doddle124578/wav2vec2-base-timit-demo-colab-3 | b2e778fc0ed9530b85085bcb96ef1b7e3c6c7570 | 2022-04-30T18:32:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | doddle124578 | null | doddle124578/wav2vec2-base-timit-demo-colab-3 | 2 | null | transformers | 25,735 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6622
- Wer: 0.5082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2195 | 8.77 | 500 | 0.9187 | 0.6635 |
| 0.5996 | 17.54 | 1000 | 0.6569 | 0.5347 |
| 0.2855 | 26.32 | 1500 | 0.6622 | 0.5082 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali221000262/wav2vec2-base-timit-ali-hasan-colab | f9186cbfb51fb682cace2a3d8343b57c542b9ea0 | 2022-04-30T17:36:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali221000262 | null | ali221000262/wav2vec2-base-timit-ali-hasan-colab | 2 | null | transformers | 25,736 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-ali-hasan-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-ali-hasan-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2471
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.5485 | 13.89 | 500 | 3.2471 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali221000262/wav2vec2-base-timit-ali-hasan-colab-EX2 | ab532a1336f03268cd2b49c6a3903fcd90c8d18b | 2022-04-30T19:02:59.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali221000262 | null | ali221000262/wav2vec2-base-timit-ali-hasan-colab-EX2 | 2 | null | transformers | 25,737 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-ali-hasan-colab-EX2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-ali-hasan-colab-EX2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5087
- Wer: 0.4458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1956 | 13.89 | 500 | 0.5087 | 0.4458 |
| 0.1946 | 27.78 | 1000 | 0.5087 | 0.4458 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ParanoidAndroid/bert-finetuned-squad | 819f3fd8f684a4caa67cca888aa28b854a298a73 | 2022-04-30T18:29:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | ParanoidAndroid | null | ParanoidAndroid/bert-finetuned-squad | 2 | null | transformers | 25,738 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
moaiz237/wav2vec2-base-timit-moaiz_exp2_new | 82cf079eeaa30974662b71758d2abbf2da8441b0 | 2022-04-30T20:03:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | moaiz237 | null | moaiz237/wav2vec2-base-timit-moaiz_exp2_new | 2 | null | transformers | 25,739 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-moaiz_exp2_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_exp2_new
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6849
- Wer: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1266 | 13.89 | 500 | 1.0233 | 0.7034 |
| 0.5928 | 27.78 | 1000 | 0.6849 | 0.5396 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab1 | c894eb2045688390377bf9b2a5e2405be980ca7d | 2022-05-01T05:22:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab1 | 2 | null | transformers | 25,740 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1904
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 5.0877 | 1.42 | 500 | 3.2909 | 1.0 |
| 3.1333 | 2.85 | 1000 | 3.2624 | 1.0 |
| 3.1335 | 4.27 | 1500 | 3.2121 | 1.0 |
| 3.1294 | 5.7 | 2000 | 3.2047 | 1.0 |
| 3.1307 | 7.12 | 2500 | 3.2020 | 1.0 |
| 3.1279 | 8.55 | 3000 | 3.1978 | 1.0 |
| 3.1296 | 9.97 | 3500 | 3.2015 | 1.0 |
| 3.1273 | 11.4 | 4000 | 3.1983 | 1.0 |
| 3.1273 | 12.82 | 4500 | 3.2258 | 1.0 |
| 3.1274 | 14.25 | 5000 | 3.2151 | 1.0 |
| 3.1256 | 15.67 | 5500 | 3.2105 | 1.0 |
| 3.1302 | 17.09 | 6000 | 3.2018 | 1.0 |
| 3.1285 | 18.52 | 6500 | 3.2006 | 1.0 |
| 3.1251 | 19.94 | 7000 | 3.1858 | 1.0 |
| 3.1283 | 21.37 | 7500 | 3.1829 | 1.0 |
| 3.1267 | 22.79 | 8000 | 3.1773 | 1.0 |
| 3.1283 | 24.22 | 8500 | 3.1857 | 1.0 |
| 3.1253 | 25.64 | 9000 | 3.1847 | 1.0 |
| 3.1251 | 27.07 | 9500 | 3.1832 | 1.0 |
| 3.1245 | 28.49 | 10000 | 3.1869 | 1.0 |
| 3.1225 | 29.91 | 10500 | 3.1904 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
fgaim/tiroberta-geezswitch | b1c45aa97d12aacb1a91acd984ab0cff30d2c9e1 | 2022-05-13T18:27:38.000Z | [
"pytorch",
"roberta",
"text-classification",
"ti",
"transformers",
"geezlab",
"license:cc-by-4.0",
"model-index"
] | text-classification | false | fgaim | null | fgaim/tiroberta-geezswitch | 2 | null | transformers | 25,741 | ---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
- text: "ወአመ ሳብዕት ዕለት ቦዘወፅአ እምውስተ ሕዝብ ከመ ያስተጋብእ ወኢረከበ።"
- text: "እሊ እግል ኖሱ አሳስ ተጠውር ወዐቦት ክምሰልቱ ሸክ ኢወትውዴ።"
- text: "ኣኩኽር ፡ ልሽክክ ናው ጀረቢነዅስክ ክሙኑኽር ክራውል ሕበርሲድኖ ገረሰነኵ።"
- text: "ነገ ለግማሽ ፍፃሜ ያለፉትን አሳውቀንና አስመርጠናችሁ እንሸልማለን።"
tags:
- geezlab
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: geezswitch-tiroberta
results: []
license: cc-by-4.0
---
# TiRoBERTa-GeezSwitch
This model is a fine-tuned version of [fgaim/tiroberta-base](https://huggingface.co/fgaim/tiroberta-base) on the [GeezSwitch](https://github.com/fgaim/geezswitch-data) dataset.
It achieves the following results on the test set:
- F1: 0.9948
- Recall: 0.9948
- Precision: 0.9948
- Accuracy: 0.9948
- Loss: 0.0222
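The card does not include a usage snippet, so the following is a minimal sketch of loading the checkpoint for language identification with the `text-classification` pipeline (the exact label strings returned depend on the checkpoint's config and are not listed in this card):
```python
from transformers import pipeline

# Load the fine-tuned GeezSwitch language-identification checkpoint
classifier = pipeline("text-classification", model="fgaim/tiroberta-geezswitch")

# Classify one of the widget sentences (Tigrinya); label names come from the model config
print(classifier("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"))
```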
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- seed: 42
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
### Citation
If you use this model or the GeezSwitch dataset in your research, please cite as follows:
```bibtex
@inproceedings{fgaim2022geezswitch,
title={GeezSwitch: Language Identification in Typologically Related Low-resourced East African Languages},
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
year={2022}
}
```
|
fgaim/tielectra-geezswitch | 4f71499b90174207e2845303c1bb77434e8d67ab | 2022-05-14T06:20:23.000Z | [
"pytorch",
"electra",
"text-classification",
"ti",
"transformers",
"geezlab",
"license:cc-by-4.0",
"model-index"
] | text-classification | false | fgaim | null | fgaim/tielectra-geezswitch | 2 | null | transformers | 25,742 | ---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
- text: "ወአመ ሳብዕት ዕለት ቦዘወፅአ እምውስተ ሕዝብ ከመ ያስተጋብእ ወኢረከበ።"
- text: "እሊ እግል ኖሱ አሳስ ተጠውር ወዐቦት ክምሰልቱ ሸክ ኢወትውዴ።"
- text: "ኣኩኽር ፡ ልሽክክ ናው ጀረቢነዅስክ ክሙኑኽር ክራውል ሕበርሲድኖ ገረሰነኵ።"
- text: "ነገ ለግማሽ ፍፃሜ ያለፉትን አሳውቀንና አስመርጠናችሁ እንሸልማለን።"
tags:
- geezlab
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: geezswitch-tielectra
results: []
license: cc-by-4.0
---
# TiELECTRA-GeezSwitch
This model is a fine-tuned version of [fgaim/tielectra-small](https://huggingface.co/fgaim/tielectra-small) on the [GeezSwitch](https://github.com/fgaim/geezswitch-data) dataset.
It achieves the following results on the test set:
- F1: 0.9844
- Recall: 0.9844
- Precision: 0.9845
- Accuracy: 0.9844
- Loss: 0.2190
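A minimal usage sketch (the label strings are defined by the checkpoint's config, not listed in this card):
```python
from transformers import pipeline

# Load the fine-tuned TiELECTRA GeezSwitch language-identification checkpoint
classifier = pipeline("text-classification", model="fgaim/tielectra-geezswitch")

# Classify one of the widget sentences (Ge'ez)
print(classifier("ወአመ ሳብዕት ዕለት ቦዘወፅአ እምውስተ ሕዝብ ከመ ያስተጋብእ ወኢረከበ።"))
```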
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- seed: 42
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
### Citation
If you use this model or the GeezSwitch dataset in your research, please cite as follows:
```bibtex
@inproceedings{fgaim2022geezswitch,
title={GeezSwitch: Language Identification in Typologically Related Low-resourced East African Languages},
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
year={2022}
}
```
|
mriggs/tgb_99_100 | 9f30b05aea53701c74195a30d38a6d2d4f634389 | 2022-05-01T06:41:53.000Z | [
"pytorch",
"flaubert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mriggs | null | mriggs/tgb_99_100 | 2 | null | transformers | 25,743 | Entry not found |
scasutt/wav2vec2-large-xlsr-53_full_final_train_first_half | 3dca490618257a1682b23396247acecd18881180 | 2022-05-01T22:20:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_full_final_train_first_half | 2 | null | transformers | 25,744 | Entry not found |
Siyam/SKYLy | 2da92c3545073da4fcccdd174fa564030dc14860 | 2022-05-01T16:02:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Siyam | null | Siyam/SKYLy | 2 | null | transformers | 25,745 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: SKYLy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SKYLy
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7645
- Wer: 0.4083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4215 | 4.26 | 400 | 1.6323 | 0.9857 |
| 0.5716 | 8.51 | 800 | 0.6679 | 0.5107 |
| 0.1721 | 12.77 | 1200 | 0.6935 | 0.4632 |
| 0.1063 | 17.02 | 1600 | 0.7533 | 0.4432 |
| 0.0785 | 21.28 | 2000 | 0.7208 | 0.4255 |
| 0.0608 | 25.53 | 2400 | 0.7481 | 0.4117 |
| 0.0493 | 29.79 | 2800 | 0.7645 | 0.4083 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
|
huggingtweets/umakomptonrose | 12735cef195dec72ac56168c627ac8fb24024d26 | 2022-05-01T10:41:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/umakomptonrose | 2 | null | transformers | 25,746 | ---
language: en
thumbnail: http://www.huggingtweets.com/umakomptonrose/1651401701205/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509685524361105414/-iZ0C4dW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Uma Kompton</div>
<div style="text-align: center; font-size: 14px;">@umakomptonrose</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Uma Kompton.
| Data | Uma Kompton |
| --- | --- |
| Tweets downloaded | 184 |
| Retweets | 9 |
| Short tweets | 22 |
| Tweets kept | 153 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3q3vjpe4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @umakomptonrose's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37a8dws9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37a8dws9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/umakomptonrose')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/a_ergt-sausifaktai-suuiluap | fac5edf5fb0112a16a8361cee0af5f42ad5940b7 | 2022-05-01T11:05:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/a_ergt-sausifaktai-suuiluap | 2 | null | transformers | 25,747 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1512730099614953472/dyaBioOx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/703268070962372608/sWc1Y_Ch_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/783999503711997952/BHnn3C1Z_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Æ𝚐𝚛𝚝 & Sausi Faktai & Pαulius</div>
<div style="text-align: center; font-size: 14px;">@a_ergt-sausifaktai-suuiluap</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Æ𝚐𝚛𝚝 & Sausi Faktai & Pαulius.
| Data | Æ𝚐𝚛𝚝 | Sausi Faktai | Pαulius |
| --- | --- | --- | --- |
| Tweets downloaded | 3241 | 3194 | 3192 |
| Retweets | 299 | 19 | 811 |
| Short tweets | 977 | 16 | 484 |
| Tweets kept | 1965 | 3159 | 1897 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bn9w1ob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @a_ergt-sausifaktai-suuiluap's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3txmfh51) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3txmfh51/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/a_ergt-sausifaktai-suuiluap')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hdmt/aligner-en-vi | b042e334f705d89545d6889c7e026813ef09672d | 2022-05-01T13:26:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hdmt | null | hdmt/aligner-en-vi | 2 | null | transformers | 25,748 | test |
hassnain/wav2vec2-base-timit-demo-colab647 | e50ea77814c02a55d00910c800d4acbf5afc21cc | 2022-05-01T15:54:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab647 | 2 | null | transformers | 25,749 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab647
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab647
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5534
- Wer: 0.4799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2072 | 7.04 | 500 | 3.7757 | 1.0 |
| 1.2053 | 14.08 | 1000 | 0.6128 | 0.5648 |
| 0.3922 | 21.13 | 1500 | 0.5547 | 0.5035 |
| 0.2157 | 28.17 | 2000 | 0.5534 | 0.4799 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
h4d35/dummy-model | 8479cee3c6a5323ea2327bac0abfcca489ebe9c3 | 2022-05-01T18:50:16.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | h4d35 | null | h4d35/dummy-model | 2 | null | transformers | 25,750 | Entry not found |
charityking2358/taglish-electra-1k | 7dcc5285c2b996e6b3a2bd34bb038c60641acb9a | 2022-05-01T19:10:51.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-1k | 2 | null | transformers | 25,751 | Entry not found |
Worldman/pega_70_articles | 24cd9b784201ab1594b37e4f18810891e1b16305 | 2022-06-03T13:13:37.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Worldman | null | Worldman/pega_70_articles | 2 | null | transformers | 25,752 | ---
tags:
- generated_from_trainer
model-index:
- name: pega_70_articles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pega_70_articles
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Ghost1/bert-finetuned-squad1 | fff760ff15500a85c35c21da6b7a0d56b90be223 | 2022-05-02T02:28:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Ghost1 | null | Ghost1/bert-finetuned-squad1 | 2 | 0 | transformers | 25,753 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/mixed_sim3_seed1 | 7fd87554092e912b0b7fe917716e47e91fb85531 | 2022-05-02T02:10:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim3_seed1 | 2 | null | transformers | 25,754 | Entry not found |
PSW/mixed_sim3_seed27 | bfff88e5634dea3985cfd8629322192908a5496d | 2022-05-02T02:54:03.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim3_seed27 | 2 | null | transformers | 25,755 | Entry not found |
neonkitchen/wav2vec2-tcrs | 9a761d49f3c5387affc7dc24911b423ecf9ca7b3 | 2022-05-04T08:19:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | neonkitchen | null | neonkitchen/wav2vec2-tcrs | 2 | null | transformers | 25,756 | Entry not found |
maesneako/gpt2-fr_orfeo-cid-paco-cheese_e3 | d2031570f38265d62a97be397f9963d95170e3eb | 2022-05-02T19:59:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | maesneako | null | maesneako/gpt2-fr_orfeo-cid-paco-cheese_e3 | 2 | null | transformers | 25,757 | Entry not found |
Willow/DialoGPT-medium-willow | 9bdd71f002c9c8ea8d8d38e930a3680ce04653c0 | 2022-05-02T23:07:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Willow | null | Willow/DialoGPT-medium-willow | 2 | null | transformers | 25,758 | ---
tags:
- conversational
---
# Willow DialoGPT Model |
veronica320/MPE_bert-l | b647182c59e1d22a35d1cf74fe3859e8f3565abb | 2022-05-03T02:15:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | veronica320 | null | veronica320/MPE_bert-l | 2 | null | transformers | 25,759 | Entry not found |
veronica320/MPE_roberta-l | b28d8ff3b00f7ded288286482af78866d68a7e7a | 2022-05-03T02:23:06.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | veronica320 | null | veronica320/MPE_roberta-l | 2 | null | transformers | 25,760 | Entry not found |
veronica320/ADEPT_bert-l | 56fe3e38a632efb0d523c821e2301586f5708904 | 2022-05-03T02:24:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | veronica320 | null | veronica320/ADEPT_bert-l | 2 | null | transformers | 25,761 | Entry not found |
huggingtweets/lonelythey18 | 4cee3938f4210aeaf49c2a77964afbe1ae1188bb | 2022-05-03T05:01:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/lonelythey18 | 2 | null | transformers | 25,762 | ---
language: en
thumbnail: http://www.huggingtweets.com/lonelythey18/1651554075248/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488171735174238211/4Y7YAhJG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cara</div>
<div style="text-align: center; font-size: 14px;">@lonelythey18</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cara.
| Data | Cara |
| --- | --- |
| Tweets downloaded | 2640 |
| Retweets | 301 |
| Short tweets | 500 |
| Tweets kept | 1839 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3l0t3r5o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lonelythey18's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1znlhqjr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1znlhqjr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lonelythey18')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kravchenko/uk-mt5-base | 9331e4e6e170df5e9c09ed2997bdf489e89558f9 | 2022-06-12T14:57:59.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"uk",
"en",
"transformers",
"t5",
"autotrain_compatible"
] | text2text-generation | false | kravchenko | null | kravchenko/uk-mt5-base | 2 | 2 | transformers | 25,763 | ---
language:
- uk
- en
tags:
- t5
---
The aim is to compress the mT5-base model so that it keeps only Ukrainian and some basic English.
This reproduces a similar result (for a different language) to [this](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) Medium article.
Results:
- 582M params -> 244M params (a 58% reduction)
- 250K vocabulary tokens -> 30K vocabulary tokens
- 2.2 GB model size -> 0.95 GB model size
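A minimal loading sketch (this is a raw pretrained checkpoint, so like mT5-base it is intended as a starting point for fine-tuning rather than zero-shot generation; the example sentence is only illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the vocabulary-pruned mT5-base checkpoint (~30K tokens instead of 250K)
tokenizer = AutoTokenizer.from_pretrained("kravchenko/uk-mt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("kravchenko/uk-mt5-base")

# Encode a Ukrainian sentence and inspect the reduced vocabulary size
batch = tokenizer("Це тестове речення українською мовою.", return_tensors="pt")
print(model.config.vocab_size, batch.input_ids.shape)
```
|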
wvangils/NL_BERT_michelin_finetuned | 0b7db35f51649a3c66b00e76412d3b63cb0616f3 | 2022-05-06T07:53:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | wvangils | null | wvangils/NL_BERT_michelin_finetuned | 2 | 1 | transformers | 25,764 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: NL_BERT_michelin_finetuned
results: []
widget:
- text: "Wat een geweldige ervaring. Wij gebruikte de lunch bij de Librije. 10 gangen met in overleg hierbij gekozen wijnen. Alles klopt. De aandacht, de timing, prachtige gerechtjes. En wat een smaaksensaties! Bediening met humor. Altijd daar wanneer je ze nodig hebt, maar nooit overdreven aanwezig."
example_title: "Michelin restaurant"
- text: "Mooie locatie, aardige medewerkers. Maaltijdsalade helaas teleurstellend, zeer kleine portie voor 13,80. Jammer."
example_title: "Mooie locatie, matig eten"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NL_BERT_michelin_finetuned
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on a [Dutch restaurant reviews dataset](https://huggingface.co/datasets/cmotions/NL_restaurant_reviews). Provide Dutch review text to the API on the right and receive a score that indicates whether this restaurant is eligible for a Michelin star ;)
It achieves the following results on the evaluation set:
- Loss: 0.0637
- Accuracy: 0.9836
- Recall: 0.5486
- Precision: 0.7914
- F1: 0.6480
- Mse: 0.0164
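A minimal inference sketch (not part of the original card; the exact label string for the "Michelin-worthy" class depends on the checkpoint's config and is not stated here):
```python
from transformers import pipeline

# Load the fine-tuned Dutch review classifier
classifier = pipeline("text-classification", model="wvangils/NL_BERT_michelin_finetuned")

# Score one of the widget reviews; the returned label and score indicate Michelin-star eligibility
review = "Mooie locatie, aardige medewerkers. Maaltijdsalade helaas teleurstellend, zeer kleine portie voor 13,80. Jammer."
print(classifier(review))
```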
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 | Mse |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.1043 | 1.0 | 3647 | 0.0961 | 0.9792 | 0.3566 | 0.7606 | 0.4856 | 0.0208 |
| 0.0799 | 2.0 | 7294 | 0.0797 | 0.9803 | 0.4364 | 0.7415 | 0.5495 | 0.0197 |
| 0.0589 | 3.0 | 10941 | 0.0637 | 0.9836 | 0.5486 | 0.7914 | 0.6480 | 0.0164 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
masakhane/m2m100_418M_hau_en_rel_ft | 990b4cd481628eefb49a73c481afe6403cec55f3 | 2022-05-03T13:55:17.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_hau_en_rel_ft | 2 | null | transformers | 25,765 | Entry not found |
PSW/min_senttrm_del_seed27 | f59213da081e02d80f636014213848d27955e365 | 2022-05-03T14:34:17.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min_senttrm_del_seed27 | 2 | null | transformers | 25,766 | Entry not found |
laituan245/molt5-small-smiles2caption | 639e8279ee5e47a40ec949675cf996f173175d84 | 2022-05-03T18:07:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.11817",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | laituan245 | null | laituan245/molt5-small-smiles2caption | 2 | null | transformers | 25,767 | ---
license: apache-2.0
---
This model can be used to generate a natural-language caption for a molecule given as an input SMILES string.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the fine-tuned MolT5 checkpoint and its tokenizer
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-small-smiles2caption", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small-smiles2caption')

# Input molecule given as a SMILES string
input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# Generate a caption with beam search and decode it back to text
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
theojolliffe/bart-large-cnn-finetuned-roundup-8 | 4f19c59df9a9cd1f3bfc864bc50e9889226a03f3 | 2022-05-03T18:12:19.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-8 | 2 | null | transformers | 25,768 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-8
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4519
- Rouge1: 49.5671
- Rouge2: 27.0118
- Rougel: 30.8538
- Rougelsum: 45.5503
- Gen Len: 141.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3159 | 48.5275 | 28.0817 | 30.6646 | 45.5024 | 142.0 |
| No log | 2.0 | 264 | 1.2377 | 47.0791 | 27.4386 | 28.9458 | 44.1536 | 142.0 |
| No log | 3.0 | 396 | 1.2474 | 49.3567 | 29.5904 | 30.8029 | 46.6083 | 142.0 |
| 0.9623 | 4.0 | 528 | 1.2914 | 47.8795 | 27.0611 | 29.8538 | 44.4494 | 142.0 |
| 0.9623 | 5.0 | 660 | 1.2982 | 49.9921 | 28.4839 | 31.5688 | 46.9734 | 142.0 |
| 0.9623 | 6.0 | 792 | 1.3521 | 46.7269 | 25.8672 | 29.7325 | 43.8279 | 142.0 |
| 0.9623 | 7.0 | 924 | 1.4102 | 47.4995 | 26.0066 | 29.4342 | 44.1102 | 141.8 |
| 0.3734 | 8.0 | 1056 | 1.4519 | 49.5671 | 27.0118 | 30.8538 | 45.5503 | 141.75 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/max_senttrm_del_seed42 | 49b3e75cbe3a1c043829518f791004579af9adf3 | 2022-05-03T17:26:01.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_senttrm_del_seed42 | 2 | null | transformers | 25,769 | Entry not found |
lilitket/20220503-174039 | a5b766407fdd1722f91435b8e1cf10767bc53298 | 2022-05-04T14:12:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-174039 | 2 | null | transformers | 25,770 | Entry not found |
stevemobs/bert-finetuned-squad-pytorch | 40652388e7a6ec3768e000d8a28fd9070f9f7d4e | 2022-05-03T20:17:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/bert-finetuned-squad-pytorch | 2 | null | transformers | 25,771 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad-pytorch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad-pytorch
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
simonnedved/bert-seg-v1.5 | 76b3614880659ec0282c5a80589146c92017fdc7 | 2022-05-03T18:18:05.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | simonnedved | null | simonnedved/bert-seg-v1.5 | 2 | null | transformers | 25,772 | ---
license: apache-2.0
---
|
SebastianS/distilbert-base-uncased-finetuned-imdb | f1348fe9e709c9781fef7f2b8cb88da3d525dee3 | 2022-05-03T20:42:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | SebastianS | null | SebastianS/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 25,773 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0122
- eval_runtime: 27.9861
- eval_samples_per_second: 35.732
- eval_steps_per_second: 0.572
- epoch: 2.13
- step: 334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Cuprum/GPT2-Cyp | 15a758c50765a191104c627ee438085c9cc01654 | 2022-05-03T20:03:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:other"
] | text-generation | false | Cuprum | null | Cuprum/GPT2-Cyp | 2 | null | transformers | 25,774 | ---
license: other
---
|
PSW/min_senttrm_ins_seed1 | 1fcc4ac85bbb7770d28a21a585c5d92c73cc62aa | 2022-05-03T20:16:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min_senttrm_ins_seed1 | 2 | null | transformers | 25,775 | Entry not found |
PSW/max_senttrm_ins_seed27 | 56931597ba7d44cf4f5ecd40bbeeaa3bff00cb55 | 2022-05-03T23:08:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_senttrm_ins_seed27 | 2 | null | transformers | 25,776 | Entry not found |
ml4pubmed/scibert-scivocab-cased_pub_section | 0c6e643c067cda1cfe7d751643fe946c125aae7b | 2022-05-04T01:15:49.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:pubmed",
"transformers"
] | text-classification | false | ml4pubmed | null | ml4pubmed/scibert-scivocab-cased_pub_section | 2 | null | transformers | 25,777 | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "Many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "BACKGROUND example"
- text: "A total of 192 MI patients and 140 control persons were included."
example_title: "METHODS example"
- text: "MI patients had 18 % higher plasma levels of MAp44 (IQR 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "RESULTS example"
- text: "The finding that a brief CB group intervention delivered by real-world providers significantly reduced MDD onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "CONCLUSIONS example"
- text: "In order to understand and update the prevalence of myopia in Taiwan, a nationwide survey was performed in 1995."
example_title: "OBJECTIVE example"
---
# scibert-scivocab-cased_pub_section
- original model file name: textclassifer_scibert_scivocab_cased_pubmed_20k
- This is a fine-tuned checkpoint of `allenai/scibert_scivocab_cased` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
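A minimal sketch of running the classifier on one of the widget sentences (the pipeline wrapper is an assumption; the card itself only documents the training metadata below):
```python
from transformers import pipeline

# Load the fine-tuned document-section classifier
classifier = pipeline("text-classification", model="ml4pubmed/scibert-scivocab-cased_pub_section")

# Classify a sentence from a PubMed abstract into one of the five section classes
sentence = "A total of 192 MI patients and 140 control persons were included."
print(classifier(sentence))  # expected class: METHODS
```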
## metadata
### training_metrics
- date_run: Apr-26-2022_t-13
- huggingface_tag: allenai/scibert_scivocab_cased
- test_set: [{'test_accuracy': 0.8313589096069336, 'test_matthewscorrcoef': 0.7736952900886536, 'test_f1score': 0.8317078948020935, 'test_cross_entropy': 0.5242752432823181}]
### training_parameters
- NUM_EPOCHS: 12
- BATCH_SIZE: 32
- MAX_INPUT_LENGTH: 256
- TRAIN_FP16: True
- TRAIN_STRATEGY: freeze
- LR_SCHEDULE: reducelronplateau
- LR_INITIAL: 0.001
- WEIGHT_DECAY: 0.05
- UNFREEZE_EPOCH: 4
- hf_tag: allenai/scibert_scivocab_cased
- lowercased_input: False
- input_text_colname: description
- target_cls_colname: target
- num_classes: 5
- model_shortname: scibert_scivocab_cased
|
PSW/max_senttrm_ins_seed42 | 2050f24e70b5da35f090e2cc83c7514acd78a2fa | 2022-05-03T23:51:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_senttrm_ins_seed42 | 2 | null | transformers | 25,778 | Entry not found |
creynier/wav2vec2-base-swbd-turn-eos-long_short_utt_removed_3percent | 80745fa9bee438c33240872d2ac9827636ab4cda | 2022-05-05T10:55:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short_utt_removed_3percent | 2 | null | transformers | 25,779 | Entry not found |
neelan-elucidate-ai/wav2vec2-tcrs | 32ea87c391058054224c189288f3986215d8d1b8 | 2022-05-07T16:50:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | neelan-elucidate-ai | null | neelan-elucidate-ai/wav2vec2-tcrs | 2 | null | transformers | 25,780 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-tcrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-tcrs
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9550
- Wer: 1.0657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 13.6613 | 3.38 | 500 | 3.2415 | 1.0 |
| 2.9524 | 6.76 | 1000 | 3.0199 | 1.0 |
| 2.9425 | 10.14 | 1500 | 3.0673 | 1.0 |
| 2.9387 | 13.51 | 2000 | 3.0151 | 1.0 |
| 2.9384 | 16.89 | 2500 | 3.0320 | 1.0 |
| 2.929 | 20.27 | 3000 | 2.9691 | 1.0 |
| 2.9194 | 23.65 | 3500 | 2.9596 | 1.0 |
| 2.9079 | 27.03 | 4000 | 2.9279 | 1.0 |
| 2.8957 | 30.41 | 4500 | 2.9647 | 1.0 |
| 2.8385 | 33.78 | 5000 | 2.8114 | 1.0193 |
| 2.6546 | 37.16 | 5500 | 2.6744 | 1.0983 |
| 2.5866 | 40.54 | 6000 | 2.6192 | 1.1071 |
| 2.5475 | 43.92 | 6500 | 2.5777 | 1.0950 |
| 2.5177 | 47.3 | 7000 | 2.5845 | 1.1220 |
| 2.482 | 50.68 | 7500 | 2.5730 | 1.1264 |
| 2.4343 | 54.05 | 8000 | 2.5722 | 1.0955 |
| 2.3754 | 57.43 | 8500 | 2.5781 | 1.1353 |
| 2.3055 | 60.81 | 9000 | 2.6177 | 1.0972 |
| 2.2446 | 64.19 | 9500 | 2.6351 | 1.1027 |
| 2.1625 | 67.57 | 10000 | 2.6924 | 1.0756 |
| 2.1078 | 70.95 | 10500 | 2.6817 | 1.0795 |
| 2.0366 | 74.32 | 11000 | 2.7629 | 1.0657 |
| 1.9899 | 77.7 | 11500 | 2.7972 | 1.0845 |
| 1.9309 | 81.08 | 12000 | 2.8450 | 1.0734 |
| 1.8861 | 84.46 | 12500 | 2.8703 | 1.0668 |
| 1.8437 | 87.84 | 13000 | 2.9308 | 1.0917 |
| 1.8192 | 91.22 | 13500 | 2.9298 | 1.0701 |
| 1.7952 | 94.59 | 14000 | 2.9488 | 1.0685 |
| 1.7745 | 97.97 | 14500 | 2.9550 | 1.0657 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
PSW/mixed_sim4_seed1 | ca4acdd014766c6d37036cf9c623488db0d4489a | 2022-05-04T09:15:42.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim4_seed1 | 2 | null | transformers | 25,781 | Entry not found |
iis2009002/xlm-roberta-base-finetuned-panx-it | 7a5eaeceec887686a97400d6cb204095026f9347 | 2022-05-12T07:07:41.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | iis2009002 | null | iis2009002/xlm-roberta-base-finetuned-panx-it | 2 | null | transformers | 25,782 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ncthuan/xlm-l-uetqa | a5f14f366f98cd9831f461a707090dc9475fbc3f | 2022-05-04T14:39:06.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ncthuan | null | ncthuan/xlm-l-uetqa | 2 | null | transformers | 25,783 | Entry not found |
anuragshas/wav2vec2-xls-r-300m-ur-cv9-with-lm | 0fee38baf834b841d27923ac9c09676652963237 | 2022-05-10T16:51:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_9_0",
"transformers",
"mozilla-foundation/common_voice_9_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-ur-cv9-with-lm | 2 | 1 | transformers | 25,784 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_9_0
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Urdu
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_9_0
name: Common Voice 9
args: ur
metrics:
- type: wer
value: 23.750
name: Test WER
- name: Test CER
type: cer
value: 8.310
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4147
- Wer: 0.3172
- Cer: 0.1050
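A minimal inference sketch (the audio path is a placeholder; audio should be sampled at 16 kHz, and LM-boosted decoding assumes `pyctcdecode` and `kenlm` are installed so the bundled n-gram language model can be used):
```python
from transformers import pipeline

# Load the fine-tuned XLS-R Urdu checkpoint (decoding falls back to plain CTC
# if the language-model dependencies are not available)
asr = pipeline("automatic-speech-recognition", model="anuragshas/wav2vec2-xls-r-300m-ur-cv9-with-lm")

# Transcribe a 16 kHz Urdu recording
print(asr("sample_urdu.wav"))
```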
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5108
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.2894 | 7.83 | 400 | 3.1501 | 1.0 | 1.0 |
| 1.8586 | 15.68 | 800 | 0.8871 | 0.6721 | 0.2402 |
| 1.3431 | 23.52 | 1200 | 0.5813 | 0.5502 | 0.1939 |
| 1.2052 | 31.37 | 1600 | 0.4956 | 0.4788 | 0.1665 |
| 1.1097 | 39.21 | 2000 | 0.4447 | 0.4143 | 0.1397 |
| 1.0528 | 47.06 | 2400 | 0.4439 | 0.3961 | 0.1333 |
| 0.9939 | 54.89 | 2800 | 0.4348 | 0.4014 | 0.1379 |
| 0.9441 | 62.74 | 3200 | 0.4236 | 0.3653 | 0.1223 |
| 0.913 | 70.58 | 3600 | 0.4309 | 0.3475 | 0.1157 |
| 0.8678 | 78.43 | 4000 | 0.4270 | 0.3337 | 0.1110 |
| 0.8414 | 86.27 | 4400 | 0.4158 | 0.3220 | 0.1070 |
| 0.817 | 94.12 | 4800 | 0.4185 | 0.3231 | 0.1072 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
Danastos/dpr-ctx_encoder_el_custom | c9b6eff80ee8d4bf3d36df333019b30172390c72 | 2022-05-04T15:58:48.000Z | [
"pytorch",
"dpr",
"transformers"
] | null | false | Danastos | null | Danastos/dpr-ctx_encoder_el_custom | 2 | null | transformers | 25,785 | Entry not found |
laituan245/t5-v1_1-base-smiles2caption | b10fe1ac49becd243c539e43a2aa9e80898e7b70 | 2022-05-05T00:29:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | laituan245 | null | laituan245/t5-v1_1-base-smiles2caption | 2 | null | transformers | 25,786 | ---
license: apache-2.0
---
|
laituan245/t5-v1_1-small-caption2smiles-ft-from-pretrained-zinc | d01ebb0c4b3d3a3d96e88ba2ed1c9b5f07314440 | 2022-05-05T02:32:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | laituan245 | null | laituan245/t5-v1_1-small-caption2smiles-ft-from-pretrained-zinc | 2 | null | transformers | 25,787 | Entry not found |
PSW/low_resource_percent1_maxsimins_seed42 | 4edf9c417c25a6f07d9d4b6d7ad51a28854b62ab | 2022-05-05T06:40:52.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_maxsimins_seed42 | 2 | null | transformers | 25,788 | Entry not found |
PSW/low_resource_percent1_minmaxswap_seed1 | 788c6236d0e310e5c38bc61b82a1ba03cfd10f1f | 2022-05-05T06:51:45.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_minmaxswap_seed1 | 2 | null | transformers | 25,789 | Entry not found |
PSW/low_resource_percent1_minmaxswap_seed42 | dcc758091d1c8ce08717832cc4686e2eba5b9893 | 2022-05-05T07:13:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_minmaxswap_seed42 | 2 | null | transformers | 25,790 | Entry not found |
chrisvinsen/xlsr-wav2vec2-base-commonvoice-demo-colab-6 | 4dfcce54f2aeca13efd68e7c4ea00ecd8505ff4c | 2022-05-05T07:51:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/xlsr-wav2vec2-base-commonvoice-demo-colab-6 | 2 | null | transformers | 25,791 | Entry not found |
mtluczek80/VATestNew | 7d8dba9e5316cb55c361b8e353fd6446249a9f2e | 2022-05-05T07:53:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:other",
"autotrain_compatible"
] | fill-mask | false | mtluczek80 | null | mtluczek80/VATestNew | 2 | null | transformers | 25,792 | ---
license: other
---
|
PSW/low_resource_percent1_minsimdel_seed42 | 14bffff62cc888925880fcea4e95a9de413a3505 | 2022-05-05T07:46:37.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_minsimdel_seed42 | 2 | null | transformers | 25,793 | Entry not found |
catofnull/my-awesome-model | 62bc2ded9663faa51c2b56db6da1019be3165181 | 2022-05-05T07:41:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | catofnull | null | catofnull/my-awesome-model | 2 | null | transformers | 25,794 | Entry not found |
PSW/low_resource_percent1_randomdel_seed42 | 6da7bb1aa4b5844d47294524e975bd6b9c970829 | 2022-05-05T08:18:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_randomdel_seed42 | 2 | null | transformers | 25,795 | Entry not found |
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-03 | 578602470920d8cc1a7128d23034fc113c20b906 | 2022-05-05T15:44:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:filipino_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Khalsuu | null | Khalsuu/english-filipino-wav2vec2-l-xls-r-test-03 | 2 | null | transformers | 25,796 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- filipino_voice
model-index:
- name: english-filipino-wav2vec2-l-xls-r-test-03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-filipino-wav2vec2-l-xls-r-test-03
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Wer: 0.3676
## Model description
More information needed
## Intended uses & limitations
More information needed
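As a minimal illustration (not part of the original auto-generated card), the fine-tuned checkpoint can presumably be loaded with the standard `transformers` automatic-speech-recognition pipeline; the audio file name below is a placeholder, and the 16 kHz mono preprocessing is assumed from the usual XLS-R / wav2vec2 convention rather than stated in this card.

```python
from transformers import pipeline

# Sketch only: loads this repository's checkpoint for inference.
asr = pipeline(
    "automatic-speech-recognition",
    model="Khalsuu/english-filipino-wav2vec2-l-xls-r-test-03",
)

# "sample.wav" is a placeholder path; audio should be 16 kHz mono.
print(asr("sample.wav")["text"])
```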
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3398 | 2.09 | 400 | 0.5733 | 0.6166 |
| 0.5087 | 4.19 | 800 | 0.5210 | 0.4775 |
| 0.344 | 6.28 | 1200 | 0.5284 | 0.5008 |
| 0.2745 | 8.38 | 1600 | 0.5195 | 0.4457 |
| 0.2153 | 10.47 | 2000 | 0.5820 | 0.4668 |
| 0.1797 | 12.57 | 2400 | 0.4915 | 0.4432 |
| 0.1513 | 14.66 | 2800 | 0.6316 | 0.4513 |
| 0.1355 | 16.75 | 3200 | 0.5328 | 0.4070 |
| 0.1204 | 18.85 | 3600 | 0.5800 | 0.4405 |
| 0.1062 | 20.94 | 4000 | 0.6887 | 0.4532 |
| 0.0931 | 23.04 | 4400 | 0.6184 | 0.4152 |
| 0.0821 | 25.13 | 4800 | 0.7413 | 0.4461 |
| 0.0733 | 27.23 | 5200 | 0.7160 | 0.4549 |
| 0.071 | 29.32 | 5600 | 0.7001 | 0.4048 |
| 0.0577 | 31.41 | 6000 | 0.7839 | 0.4309 |
| 0.051 | 33.51 | 6400 | 0.7764 | 0.4128 |
| 0.046 | 35.6 | 6800 | 0.6753 | 0.3875 |
| 0.0384 | 37.7 | 7200 | 0.7106 | 0.3856 |
| 0.0359 | 39.79 | 7600 | 0.6932 | 0.3676 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
PSW/low_resource_percent1_randomins_seed42 | d2015cad3ff0c4864f8f0177ec37415b792ae96e | 2022-05-05T08:51:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_randomins_seed42 | 2 | null | transformers | 25,797 | Entry not found |
PSW/low_resource_percent1_randomswap_seed27 | c5b03201591aae7aa3ea14cef91ae049a087565c | 2022-05-05T09:12:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/low_resource_percent1_randomswap_seed27 | 2 | null | transformers | 25,798 | Entry not found |
CarlCochet/trajectory-transformer-halfcheetah-expert-v2 | 89941d7f01a17c51d8bdeb8a25b21bf7f6439cae | 2022-05-12T17:00:41.000Z | [
"pytorch",
"trajectory_transformer",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | CarlCochet | null | CarlCochet/trajectory-transformer-halfcheetah-expert-v2 | 2 | null | transformers | 25,799 | ---
license: mit
---
|