modelId (string, 4-112) | sha (string, 40) | lastModified (string, 24) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38, ⌀) | config (null) | id (string, 4-112) | downloads (float64, 0-36.8M, ⌀) | likes (float64, 0-712, ⌀) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner | 247c6edf286841fa9c7476be35c6bba510571ff1 | 2021-05-20T09:15:08.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner | 14 | 1 | transformers | 9,900 | Entry not found |
yechen/bert-base-chinese-jinyong | 6c3ab99a0f88fb30447dc0611ad04547a8ebd4fc | 2021-05-20T09:20:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | yechen | null | yechen/bert-base-chinese-jinyong | 14 | null | transformers | 9,901 | ---
language: zh
---
|
inovex/multi2convai-logistics-de-bert | a10d6b5a0cdcfec7cfdd0af294791f2b556e7e17 | 2022-03-01T08:53:44.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"license:mit"
]
| text-classification | false | inovex | null | inovex/multi2convai-logistics-de-bert | 14 | null | transformers | 9,902 | ---
tags:
- text-classification
widget:
- text: "Wo kann ich das Paket ablegen?"
license: mit
language: de
---
# Multi2ConvAI-Logistics: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/de/blog/use-cases))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-de-bert")
```
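As a quick illustration (not part of the original card), the fine-tuned classifier can also be called through the `pipeline` API; the label names returned depend on the model's config:
```python
from transformers import pipeline

# Sketch: classify a German logistics utterance with the fine-tuned model.
classifier = pipeline(
    "text-classification",
    model="inovex/multi2convai-logistics-de-bert",
)
print(classifier("Wo kann ich das Paket ablegen?"))
```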
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
ghadeermobasher/BC5CDR-Disease-Modified_biobert-v1.1 | c2a32f7edfb3e46b7058802f5a333b7ac102b86a | 2022-02-25T18:25:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Disease-Modified_biobert-v1.1 | 14 | null | transformers | 9,903 | Entry not found |
bookbot/distil-wav2vec2-adult-child-cls-52m | 9d08da904f0bc62228591b5486fc7579a39406ca | 2022-02-26T13:48:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"en",
"arxiv:2006.11477",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | bookbot | null | bookbot/distil-wav2vec2-adult-child-cls-52m | 14 | null | transformers | 9,904 | ---
language: en
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distil-wav2vec2-adult-child-cls-52m
results: []
---
# DistilWav2Vec2 Adult/Child Speech Classifier 52M
DistilWav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a distilled version of [wav2vec2-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-adult-child-cls) on a private adult/child speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
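As a usage illustration (not from the original card), the checkpoint can be run through the audio-classification pipeline; `speech.wav` below is a placeholder for any 16 kHz mono recording:
```python
from transformers import pipeline

# Sketch: label a speech clip as adult or child speech.
classifier = pipeline(
    "audio-classification",
    model="bookbot/distil-wav2vec2-adult-child-cls-52m",
)
print(classifier("speech.wav"))  # placeholder audio path
```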
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------------- | ------- | ----------- | ----------------------------------------- |
| `distil-wav2vec2-adult-child-cls-52m` | 52M | wav2vec 2.0 | Adult/Child Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| --------------------------------- | ------ | -------- | ------ |
| Adult/Child Speech Classification | 0.1301 | 96.03% | 0.9639 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 32
- `eval_batch_size`: 32
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 128
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.212 | 1.0 | 96 | 0.1561 | 0.9561 | 0.9596 |
| 0.1523 | 2.0 | 192 | 0.1408 | 0.9575 | 0.9616 |
| 0.0844 | 3.0 | 288 | 0.1301 | 0.9603 | 0.9639 |
## Disclaimer
Consider the biases from the pre-training datasets, which may carry over into this model's results.
## Authors
DistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle.
## Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
bullmount/hseBert-it-cased | 17c2194c6c181634fba88ab1dad03e81e66ef5f7 | 2022-02-27T18:08:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"it",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | bullmount | null | bullmount/hseBert-it-cased | 14 | null | transformers | 9,905 | ---
language: it
license: mit
widget:
- text: "È stata pubblicata la [MASK] di conversione del D.L. 24 dicembre 2021 n. 221 ."
- text: "La legge fornisce l’esatta [MASK] di Green pass base."
- text: "Il datore di lavoro organizza e predispone i posti di lavoro di cui all'articolo 173, in [MASK] ai requisiti minimi di cui all'allegato XXXIV."
- text: "Le principali novità riguardano la quarantena precauzionale e il [MASK] di autosorveglianza."
---
# hseBERT
**hseBert-it-cased** is a BERT model obtained by adaptive MLM tuning of [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on Italian regulatory texts (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81; Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), approximately 7k sentences.
# Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "bullmount/hseBert-it-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
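# Added sketch (not part of the original card): exercise the MLM head with
# the fill-mask pipeline, reusing one of the widget examples above.
from transformers import pipeline
fill_mask = pipeline("fill-mask", model=model_name)
print(fill_mask("La legge fornisce l’esatta [MASK] di Green pass base."))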
``` |
Kiran146/distilbert-base-uncased-finetuned-emotion | 1b2749fa693e6ea2505de92cde014cf983883e4a | 2022-02-28T17:30:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Kiran146 | null | Kiran146/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,906 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9227765339978083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Accuracy: 0.9225
- F1: 0.9228
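Since this is a standard sequence-classification checkpoint, a minimal inference sketch (not part of the generated card) looks like the following; the emotion dataset's six labels are sadness, joy, love, anger, fear, and surprise:
```python
from transformers import pipeline

# Sketch: score a sentence against the six emotion labels.
classifier = pipeline(
    "text-classification",
    model="Kiran146/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy to see you again!"))
```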
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.84 | 1.0 | 250 | 0.3133 | 0.909 | 0.9070 |
| 0.2459 | 2.0 | 500 | 0.2224 | 0.9225 | 0.9228 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ghadeermobasher/BC5CDR-Chem2-Modified_BiomedNLP-PubMedBERT-base-uncased-abstract | 924e0949a0c8de63609e1199c399d6380f7242a7 | 2022-03-01T06:46:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem2-Modified_BiomedNLP-PubMedBERT-base-uncased-abstract | 14 | null | transformers | 9,907 | Entry not found |
davanstrien/convnext_manuscript_iiif | 6c2da8478fafd75d3b12e13badfeb6b1a1306b2f | 2022-03-08T02:21:52.000Z | [
"pytorch",
"convnext",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | davanstrien | null | davanstrien/convnext_manuscript_iiif | 14 | null | transformers | 9,908 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- f1
model-index:
- name: convnext_manuscript_iiif
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext_manuscript_iiif
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the davanstrien/iiif_manuscripts_label_ge_50 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5856
- F1: 0.0037
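As a usage sketch (not part of the generated card), the checkpoint can be queried with the image-classification pipeline; `page.jpg` is a placeholder path, and given the low F1 the predictions should be treated as exploratory:
```python
from transformers import pipeline

# Sketch: predict a manuscript label for a page image.
classifier = pipeline(
    "image-classification",
    model="davanstrien/convnext_manuscript_iiif",
)
print(classifier("page.jpg"))  # placeholder image path
```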
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.5753 | 1.0 | 2038 | 6.4121 | 0.0016 |
| 5.9865 | 2.0 | 4076 | 5.9466 | 0.0021 |
| 5.6521 | 3.0 | 6114 | 5.7645 | 0.0029 |
| 5.3123 | 4.0 | 8152 | 5.6890 | 0.0033 |
| 5.0337 | 5.0 | 10190 | 5.6692 | 0.0034 |
| 4.743 | 6.0 | 12228 | 5.5856 | 0.0037 |
| 4.4387 | 7.0 | 14266 | 5.5969 | 0.0042 |
| 4.1422 | 8.0 | 16304 | 5.6711 | 0.0043 |
| 3.8372 | 9.0 | 18342 | 5.6761 | 0.0044 |
| 3.5244 | 10.0 | 20380 | 5.8469 | 0.0042 |
| 3.2321 | 11.0 | 22418 | 5.8774 | 0.0045 |
| 2.9004 | 12.0 | 24456 | 6.1186 | 0.0047 |
| 2.5937 | 13.0 | 26494 | 6.2398 | 0.0046 |
| 2.2983 | 14.0 | 28532 | 6.3732 | 0.0049 |
| 2.0611 | 15.0 | 30570 | 6.5024 | 0.0045 |
| 1.8153 | 16.0 | 32608 | 6.6585 | 0.0047 |
| 1.6075 | 17.0 | 34646 | 6.8333 | 0.0043 |
| 1.4342 | 18.0 | 36684 | 6.9529 | 0.0044 |
| 1.2614 | 19.0 | 38722 | 7.1129 | 0.0046 |
| 1.1463 | 20.0 | 40760 | 7.1977 | 0.0039 |
| 1.0387 | 21.0 | 42798 | 7.2700 | 0.0044 |
| 0.9635 | 22.0 | 44836 | 7.3375 | 0.0040 |
| 0.8872 | 23.0 | 46874 | 7.4003 | 0.0039 |
| 0.8156 | 24.0 | 48912 | 7.4884 | 0.0039 |
| 0.7544 | 25.0 | 50950 | 7.4764 | 0.0039 |
| 0.6893 | 26.0 | 52988 | 7.5153 | 0.0042 |
| 0.6767 | 27.0 | 55026 | 7.5427 | 0.0043 |
| 0.6098 | 28.0 | 57064 | 7.5547 | 0.0042 |
| 0.5871 | 29.0 | 59102 | 7.5533 | 0.0041 |
| 0.5696 | 30.0 | 61140 | 7.5595 | 0.0041 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Ayou/chinese_mobile_bert | 34618c0214ac41f7e13d5ffc89ad634e16afb25a | 2022-03-04T12:49:12.000Z | [
"pytorch",
"mobilebert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Ayou | null | Ayou/chinese_mobile_bert | 14 | 1 | transformers | 9,909 | ---
license: apache-2.0
---
MobileBERT pre-trained on a Chinese corpus of 250 million samples, trained for 1 million steps on a single A100 GPU over 15 days. |
LukasStankevicius/ByT5-Lithuanian-gec-100h | 3d2fc303c10482409f1b63adc030c5cefd1fd071 | 2022-07-28T05:55:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"lt",
"transformers",
"byt5",
"Lithuanian",
"grammatical error correction",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | LukasStankevicius | null | LukasStankevicius/ByT5-Lithuanian-gec-100h | 14 | null | transformers | 9,910 | ---
language: lt
tags:
- byt5
- Lithuanian
- grammatical error correction
widget:
- text: 'Sveiki pardodu tvarkyngą "Audi" firmos automobylį. Kątik iš Amerikės. Viena savininka prižiurietas ir mylietas Automobylis. Dar turu patobulintą „Mersedes“ su automatinia greičių pavara už 4000 evrų (iš Amerikės). Taippat tvarkingas.'
license: apache-2.0
---
This is a *google/byt5-small* transformer model trained on Lithuanian text for ~100 hours.
It was created during the work [**Towards Lithuanian Grammatical Error Correction**](https://link.springer.com/chapter/10.1007/978-3-031-09076-9_44), which was presented at the [11th Computer Science On-line Conference 2022](https://csoc.openpublish.eu/).
The model is still in its infancy (we plan to train it 100x longer in the future). Nevertheless, it already shows the approach's possibilities and capabilities.
## Usage
Given the following corrupted text obtained from <https://www.diktantas.lt/pasitikrink-lietuviu-kalbos-zinias>:
```python
text = 'Sveiki pardodu tvarkyngą "Audi" firmos automobylį. Kątik iš Amerikės. Viena savininka prižiurietas ir mylietas Automobylis. Dar turu patobulintą „Mersedes“ su automatinia greičių pavara už 4000 evrų (iš Amerikės). Taippat tvarkingas.'
```
The correction can be obtained by:
```python
from transformers import pipeline
name= "LukasStankevicius/ByT5-Lithuanian-gec-100h"
my_pipeline = pipeline(task="text2text-generation", model=name, framework="pt")
corrected_text = my_pipeline(text)[0]["generated_text"]
print(corrected_text)
```
Output from the above would be:
Sveiki parduodu tvarkingą „Audi“ firmos automobilį. Ką tik iš Amerikės. Viena savininkas prižiūrintas ir mylimas automobilis. Dar turiu patobulintą „Mersedes“ su automatine greičių pavara už 4000 eurų (iš Amerikės). Taip pat tvarkingas.
More information can be found in the accompanying [GitHub repository](https://github.com/LukasStankevicius/Towards-Lithuanian-Grammatical-Error-Correction)
If you find our work useful, please cite the following paper:
```latex
@InProceedings{10.1007/978-3-031-09076-9_44,
author="Stankevi{\v{c}}ius, Lukas
and Luko{\v{s}}evi{\v{c}}ius, Mantas",
editor="Silhavy, Radek",
title="Towards Lithuanian Grammatical Error Correction",
booktitle="Artificial Intelligence Trends in Systems",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="490--503",
abstract="Everyone wants to write beautiful and correct text, yet the lack of language skills, experience, or hasty typing can result in errors. By employing the recent advances in transformer architectures, we construct a grammatical error correction model for Lithuanian, the language rich in archaic features. We compare subword and byte-level approaches and share our best trained model, achieving $F_{0.5}=0.92$, and accompanying code, in an online open-source repository.",
isbn="978-3-031-09076-9"
}
``` |
saattrupdan/wav2vec2-xls-r-300m-ftspeech | 73d80f53cfa83e395949f51673a58e07ac433679 | 2022-03-21T17:30:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"da",
"dataset:ftspeech",
"transformers",
"license:other",
"model-index"
]
| automatic-speech-recognition | false | saattrupdan | null | saattrupdan/wav2vec2-xls-r-300m-ftspeech | 14 | null | transformers | 9,911 | ---
language:
- da
license: other
tasks:
- automatic-speech-recognition
datasets:
- ftspeech
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-ftspeech
results:
- task:
type: automatic-speech-recognition
dataset:
type: mozilla-foundation/common_voice_8_0
args: da
name: Danish Common Voice 8.0
metrics:
- type: wer
value: 17.91
- task:
type: automatic-speech-recognition
dataset:
type: Alvenir/alvenir_asr_da_eval
name: Alvenir ASR test dataset
metrics:
- type: wer
value: 13.84
---
# XLS-R-300m-FTSpeech
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [FTSpeech dataset](https://ftspeech.github.io/), a dataset of 1,800 hours of transcribed speeches from the Danish parliament.
## Performance
The model achieves the following WER scores (lower is better):
| **Dataset** | **WER without LM** | **WER with 5-gram LM** |
| :---: | ---: | ---: |
| [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 20.48 | 17.91 |
| [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 15.46 | 13.84 |
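As a usage sketch (not part of the original card), transcription without the 5-gram LM can be run through the ASR pipeline; `audio.wav` is a placeholder for a 16 kHz Danish recording:
```python
from transformers import pipeline

# Sketch: transcribe Danish speech with greedy CTC decoding (no language model).
asr = pipeline(
    "automatic-speech-recognition",
    model="saattrupdan/wav2vec2-xls-r-300m-ftspeech",
)
print(asr("audio.wav")["text"])  # placeholder audio path
```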
## License
The use of this model needs to adhere to [this license from the Danish Parliament](https://www.ft.dk/da/aktuelt/tv-fra-folketinget/deling-og-rettigheder). |
gdario/distilbert-base-uncased-finetuned-emotion | eba29ac185b44875bc2f5a9a53db5f02f5c60c51 | 2022-06-25T09:24:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | gdario | null | gdario/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,912 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8955
- name: F1
type: f1
value: 0.8918003951340884
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3662
- Accuracy: 0.8955
- F1: 0.8918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5675 | 0.8265 | 0.8067 |
| 0.7565 | 2.0 | 250 | 0.3662 | 0.8955 | 0.8918 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
gayanin/bart-paraphrasing-mlm | 3ff29962918d3886b04c734943a314f915f6b853 | 2022-03-07T21:40:56.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | gayanin | null | gayanin/bart-paraphrasing-mlm | 14 | null | transformers | 9,913 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrasing-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrasing-mlm
This model is a fine-tuned version of [gayanin/bart-paraphrase-pubmed-1.1](https://huggingface.co/gayanin/bart-paraphrase-pubmed-1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5510
- Rouge2 Precision: 0.7148
- Rouge2 Recall: 0.5223
- Rouge2 Fmeasure: 0.5866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6799 | 1.0 | 13833 | 0.5982 | 0.7016 | 0.5122 | 0.5756 |
| 0.5894 | 2.0 | 27666 | 0.5663 | 0.7093 | 0.5193 | 0.583 |
| 0.5329 | 3.0 | 41499 | 0.5540 | 0.7129 | 0.5212 | 0.5853 |
| 0.4953 | 4.0 | 55332 | 0.5510 | 0.7148 | 0.5223 | 0.5866 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
MikhailGalperin/distilbert-base-uncased-finetuned-ner | 5b7d5feb69b6cc5bd95fcfadffd0bb806b4c1c96 | 2022-03-08T06:49:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | MikhailGalperin | null | MikhailGalperin/distilbert-base-uncased-finetuned-ner | 14 | null | transformers | 9,914 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
davanstrien/dit-base-manuscripts | 96f015a5b13b48267d031b93fb6b0cde838d9f24 | 2022-03-09T10:08:42.000Z | [
"pytorch",
"tensorboard",
"deit",
"transformers",
"masked-image-modeling",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| null | false | davanstrien | null | davanstrien/dit-base-manuscripts | 14 | null | transformers | 9,915 | ---
license: apache-2.0
tags:
- masked-image-modeling
- generated_from_trainer
model-index:
- name: dit-base-manuscripts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-manuscripts
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the davanstrien/iiif_manuscripts_label_ge_50 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1333
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1396 | 1.0 | 32 | 1.1261 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
mrm8488/spanish-TinyBERT-betito-finetuned-xnli-es | 6613ab5adf4570fe7ed9291fe5aafcf0f1de7b8a | 2022-03-09T07:29:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:xnli",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | mrm8488 | null | mrm8488/spanish-TinyBERT-betito-finetuned-xnli-es | 14 | null | transformers | 9,916 | ---
tags:
- generated_from_trainer
datasets:
- xnli
metrics:
- accuracy
model-index:
- name: spanish-TinyBERT-betito-finetuned-xnli-es
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: xnli
type: xnli
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.7475049900199601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-TinyBERT-betito-finetuned-xnli-es
This model is a fine-tuned version of [mrm8488/spanish-TinyBERT-betito](https://huggingface.co/mrm8488/spanish-TinyBERT-betito) on the xnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7104
- Accuracy: 0.7475
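Because the checkpoint is fine-tuned on XNLI (a natural language inference task), it can also back the zero-shot-classification pipeline, assuming its config exposes an entailment label; the sketch below is illustrative and not from the original card:
```python
from transformers import pipeline

# Sketch: zero-shot topic classification in Spanish via NLI entailment scores.
zsc = pipeline(
    "zero-shot-classification",
    model="mrm8488/spanish-TinyBERT-betito-finetuned-xnli-es",
)
print(zsc(
    "El equipo ganó el partido en el último minuto.",
    candidate_labels=["deportes", "política", "economía"],
))
```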
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.50838112218154e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7191 | 1.0 | 49399 | 0.6829 | 0.7112 |
| 0.6323 | 2.0 | 98798 | 0.6527 | 0.7305 |
| 0.5727 | 3.0 | 148197 | 0.6531 | 0.7465 |
| 0.4964 | 4.0 | 197596 | 0.7079 | 0.7427 |
| 0.4929 | 5.0 | 246995 | 0.7104 | 0.7475 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
zuppif/maskformer-swin-small-ade | f4097ad31123b84e35d4f9e977f746fa703c12ab | 2022-07-06T07:24:51.000Z | [
"pytorch",
"maskformer",
"transformers",
"object-detection",
"COCO",
"YOLO",
"Darknet",
"model-index"
]
| object-detection | false | zuppif | null | zuppif/maskformer-swin-small-ade | 14 | null | transformers | 9,917 | ---
tags:
- object-detection
- COCO
- YOLO
- Darknet
model-index:
- name: moon
results:
- metrics:
- type: mAP
value: 1
name: mAP
task:
type: object-detection
name: object-detection
dataset:
name: COCO
type: COCO
---
|
MrAnderson/bert-base-1024-full-trivia-copied-embeddings | dcafbe148e665eb18159d9248bfa71cb0d42037e | 2022-03-11T22:05:33.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | MrAnderson | null | MrAnderson/bert-base-1024-full-trivia-copied-embeddings | 14 | null | transformers | 9,918 | Entry not found |
StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en | f65c87461ee480975a97674f1a47b4f43adca6cf | 2022-03-12T11:40:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en | 14 | null | transformers | 9,919 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1811
- Precision: 0.8555
- Recall: 0.8539
- F1: 0.8547
- Accuracy: 0.9706
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) Corpus in Spanish and English.
Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical.
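As an illustrative sketch (not from the original card), entity spans can be extracted with the token-classification pipeline:
```python
from transformers import pipeline

# Sketch: tag biomedical entities (Protein, Gene, Chemical, ...) in a sentence.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)
print(ner("The p53 protein regulates the cell cycle."))
```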
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.052 | 1.0 | 1360 | 0.1413 | 0.8300 | 0.8442 | 0.8370 | 0.9677 |
| 0.0199 | 2.0 | 2720 | 0.1673 | 0.8461 | 0.8458 | 0.8459 | 0.9689 |
| 0.011 | 3.0 | 4080 | 0.1647 | 0.8588 | 0.8528 | 0.8558 | 0.9704 |
| 0.0031 | 4.0 | 5440 | 0.1811 | 0.8555 | 0.8539 | 0.8547 | 0.9706 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
negfir/distilbert-base-uncased-finetuned-cola | a7cddf4e81a9a44c899f81b98f5072b090df106d | 2022-03-24T00:39:00.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_keras_callback",
"model-index"
]
| text-classification | false | negfir | null | negfir/distilbert-base-uncased-finetuned-cola | 14 | null | transformers | 9,920 | ---
tags:
- generated_from_keras_callback
model-index:
- name: negfir/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# negfir/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [negfir/uncased_L-12_H-128_A-2](https://huggingface.co/negfir/uncased_L-12_H-128_A-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6077
- Validation Loss: 0.6185
- Train Matthews Correlation: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.6116 | 0.6187 | 0.0 | 0 |
| 0.6070 | 0.6190 | 0.0 | 1 |
| 0.6077 | 0.6185 | 0.0 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cambridgeltl/guardian_news_bert-base-uncased | 7fa82b67d7680ca4026c56a730ba77ea48b3483b | 2022-03-15T17:15:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cambridgeltl | null | cambridgeltl/guardian_news_bert-base-uncased | 14 | null | transformers | 9,921 | Entry not found |
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES | d7dcf9c3b374c4893ec6f0a50529bd70d0087638 | 2022-03-17T14:49:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES | 14 | null | transformers | 9,922 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Precision: 0.8276
- Recall: 0.8411
- F1: 0.8343
- Accuracy: 0.9676
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) Corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, and MT-translated CRAFT) were concatenated.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0549 | 1.0 | 4078 | 0.1673 | 0.8056 | 0.8112 | 0.8084 | 0.9640 |
| 0.0233 | 2.0 | 8156 | 0.1733 | 0.8321 | 0.8244 | 0.8283 | 0.9662 |
| 0.0101 | 3.0 | 12234 | 0.1972 | 0.8336 | 0.8391 | 0.8363 | 0.9678 |
| 0.0036 | 4.0 | 16312 | 0.2251 | 0.8276 | 0.8411 | 0.8343 | 0.9676 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Amo/gpt-neo-125m-mlp-micro | 30c914b11203da0db8a7404a4d947ba04bdc77b2 | 2022-03-19T08:51:57.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | Amo | null | Amo/gpt-neo-125m-mlp-micro | 14 | null | transformers | 9,923 | Entry not found |
vinaykudari/distilGPT-ft-eli5 | a58e57900a73910d982becbaeb2d284896b1bab7 | 2022-03-19T17:24:50.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | vinaykudari | null | vinaykudari/distilGPT-ft-eli5 | 14 | null | transformers | 9,924 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilGPT-ft-eli5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilGPT-ft-eli5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5643
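As a usage sketch (not part of the generated card), text can be sampled from the model with the text-generation pipeline:
```python
from transformers import pipeline

# Sketch: sample an ELI5-style continuation for a question prompt.
generator = pipeline("text-generation", model="vinaykudari/distilGPT-ft-eli5")
print(generator("Why is the sky blue?", max_length=60, do_sample=True)[0]["generated_text"])
```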
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 281 | 5.8277 |
| 5.7427 | 2.0 | 562 | 5.7525 |
| 5.7427 | 3.0 | 843 | 5.7016 |
| 5.5614 | 4.0 | 1124 | 5.6593 |
| 5.5614 | 5.0 | 1405 | 5.6273 |
| 5.4408 | 6.0 | 1686 | 5.6029 |
| 5.4408 | 7.0 | 1967 | 5.5855 |
| 5.3522 | 8.0 | 2248 | 5.5739 |
| 5.2948 | 9.0 | 2529 | 5.5670 |
| 5.2948 | 10.0 | 2810 | 5.5643 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ahmeddbahaa/t5-small-finetuned-xlsum-en | 99e2f7d583ede30df3c82c8680fbf17655051779 | 2022-03-22T19:51:49.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/t5-small-finetuned-xlsum-en | 14 | 1 | transformers | 9,925 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xlsum-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: english
metrics:
- name: Rouge1
type: rouge
value: 23.7508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xlsum-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6629
- Rouge1: 23.7508
- Rouge2: 5.5427
- Rougel: 18.6777
- Rougelsum: 18.652
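As a usage sketch (not in the generated card), the checkpoint can be called through the summarization pipeline, which applies the model's configured task prefix if one is present:
```python
from transformers import pipeline

# Sketch: summarize an English news article.
summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/t5-small-finetuned-xlsum-en",
)
article = "..."  # placeholder: paste a news article here
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```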
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.0789 | 1.0 | 1010 | 2.6881 | 22.6824 | 4.4735 | 17.6707 | 17.5485 |
| 2.9223 | 2.0 | 2020 | 2.6629 | 23.7508 | 5.5427 | 18.6777 | 18.652 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Roshan777/finetuning-sentiment-model-300-samples | 3a0f54c98fbeeb35054d634e073d200254ac494e | 2022-03-26T12:54:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Roshan777 | null | Roshan777/finetuning-sentiment-model-300-samples | 14 | null | transformers | 9,926 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-300-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.6833333333333333
- name: F1
type: f1
value: 0.6153846153846154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-300-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Accuracy: 0.6833
- F1: 0.6154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
vumichien/token-classification-bigbird-roberta-base-random | b34badb20694d2c9849c63b060f7a95e9296bbcd | 2022-03-25T03:23:28.000Z | [
"pytorch",
"big_bird",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | vumichien | null | vumichien/token-classification-bigbird-roberta-base-random | 14 | null | transformers | 9,927 | Entry not found |
alefiury/wav2vec2-large-xlsr-53-coraa-brazilian-portuguese-gain-normalization-sna | 3c219fa6ffd827bba2568eb19a1f9207c2c4b79e | 2022-04-05T16:59:13.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:CORAA",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:voxforge",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | alefiury | null | alefiury/wav2vec2-large-xlsr-53-coraa-brazilian-portuguese-gain-normalization-sna | 14 | 1 | transformers | 9,928 | ---
language: pt
datasets:
- CORAA
- common_voice
- mls
- cetuc
- voxforge
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Alef Iury XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test CORAA WER
type: wer
value: 24.89%
---
# Wav2vec 2.0 trained with CORAA Portuguese Dataset and Open Portuguese Datasets
This is a demonstration of a fine-tuned Wav2vec model for Portuguese using the following datasets:
- [CORAA dataset](https://github.com/nilc-nlp/CORAA)
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz).
- [Multilingual Librispeech (MLS)](http://www.openslr.org/94/).
- [VoxForge](http://www.voxforge.org/).
- [Common Voice 6.1](https://commonvoice.mozilla.org/pt).
## Repository
The repository that implements the model to be trained and tested is available [here](https://github.com/alefiury/SE-R_2022_Challenge_Wav2vec2). |
manu/lilt-camembert-base | 3e05da955eba893d4646c97cd7c44d1421626461 | 2022-03-30T14:49:30.000Z | [
"pytorch",
"liltrobertalike",
"fill-mask",
"fr",
"dataset:iit-cdip",
"transformers",
"token-classification",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | manu | null | manu/lilt-camembert-base | 14 | null | transformers | 9,929 | ---
language:
- fr
tags:
- token-classification
- fill-mask
license: mit
datasets:
- iit-cdip
---
This model combines the camembert-base model with the pretrained LiLT checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding".
Original repository: https://github.com/jpWang/LiLT
To use it, it is necessary to fork the modeling and configuration files from the original repository, and load the pretrained model from the corresponding classes (LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel).
They can also be preloaded with the AutoConfig/model factories as such:
```python
from transformers import AutoConfig, AutoModel, AutoModelForTokenClassification, AutoTokenizer
from path_to_custom_classes import (
LiLTRobertaLikeConfig,
LiLTRobertaLikeForRelationExtraction,
LiLTRobertaLikeForTokenClassification,
LiLTRobertaLikeModel
)
def patch_transformers():
AutoConfig.register("liltrobertalike", LiLTRobertaLikeConfig)
AutoModel.register(LiLTRobertaLikeConfig, LiLTRobertaLikeModel)
AutoModelForTokenClassification.register(LiLTRobertaLikeConfig, LiLTRobertaLikeForTokenClassification)
# etc...
```
To load the model, it is then possible to use:
```python
# patch_transformers() must have been executed beforehand
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModel.from_pretrained("manu/lilt-camembert-base")
model = AutoModelForTokenClassification.from_pretrained("manu/lilt-camembert-base") # to be fine-tuned on a token classification task
``` |
bdunnette/derbynames-aitextgen | 186c941688e3f500996911570b36a2c9c7baff41 | 2022-04-04T19:27:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:cc-by-nc-sa-4.0"
]
| text-generation | false | bdunnette | null | bdunnette/derbynames-aitextgen | 14 | null | transformers | 9,930 | ---
license: cc-by-nc-sa-4.0
---
|
abdusahmbzuai/ft-tatoeba-ar-en | 851da8800d4877252a22e919c91105c38ca70288 | 2022-04-10T15:34:36.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"dataset:open_subtitles",
"transformers",
"translation",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| translation | false | abdusahmbzuai | null | abdusahmbzuai/ft-tatoeba-ar-en | 14 | null | transformers | 9,931 | ---
tags:
- translation
- generated_from_trainer
datasets:
- open_subtitles
model-index:
- name: ft-tatoeba-ar-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-tatoeba-ar-en
This model was trained from scratch on the open_subtitles dataset.
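Since the checkpoint is an M2M-100 architecture, an Arabic-to-English inference sketch (not part of the generated card, and assuming the tokenizer keeps M2M-100's language codes) looks like:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch: translate Arabic to English with the fine-tuned M2M-100 checkpoint.
name = "abdusahmbzuai/ft-tatoeba-ar-en"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

tokenizer.src_lang = "ar"  # source language code
inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```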
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
CAiRE/wav2vec2-large-xlsr-53-cantonese | 000930f74d91d06cd347218c8413b9756a3be239 | 2022-06-09T10:55:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"zh-HK",
"yue",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | CAiRE | null | CAiRE/wav2vec2-large-xlsr-53-cantonese | 14 | 2 | transformers | 9,932 | ---
language:
- zh-HK
- yue
datasets:
- common_voice
metrics:
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2-Large-XLSR-53-Cantonese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice zh-HK
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 18.55
---
# Wav2Vec2-Large-XLSR-53-Cantonese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Cantonese using the [Common Voice Corpus 8.0](https://commonvoice.mozilla.org/en/datasets).
When using this model, make sure that your speech input is sampled at 16kHz.
Common Voice's validated `train` and `dev` splits were used for training.
The script used for training can be found at [https://github.com/holylovenia/wav2vec2-pretraining](https://github.com/holylovenia/wav2vec2-pretraining).
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "zh-HK", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
model = Wav2Vec2ForCTC.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the zh-HK test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "zh-HK", split="test")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
model = Wav2Vec2ForCTC.from_pretrained("CAiRE/wav2vec2-large-xlsr-53-cantonese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference and decode the predicted ids into strings.
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: CER: 18.55 %
## Citation
If you use our code/model, please cite us:
```
@inproceedings{lovenia2022ascend,
title={ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation},
author={Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
year={2022}
}
``` |
obokkkk/kobigbird-bert-base-finetuned-klue | 1f6083dfc29c4dc6b371b8a58b9850f0934eaeae | 2022-04-12T10:07:16.000Z | [
"pytorch",
"big_bird",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | obokkkk | null | obokkkk/kobigbird-bert-base-finetuned-klue | 14 | null | transformers | 9,933 | ---
tags:
- generated_from_trainer
model-index:
- name: kobigbird-bert-base-finetuned-klue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobigbird-bert-base-finetuned-klue
This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0743
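As a usage sketch (not part of the generated card), the checkpoint can answer Korean reading-comprehension questions through the question-answering pipeline; the example passage below is illustrative:
```python
from transformers import pipeline

# Sketch: extractive QA over a short Korean passage.
qa = pipeline(
    "question-answering",
    model="obokkkk/kobigbird-bert-base-finetuned-klue",
)
result = qa(
    question="한글은 누가 만들었나요?",  # "Who created Hangul?"
    context="한글은 세종대왕이 1443년에 창제한 문자이다.",  # "Hangul was created by King Sejong in 1443."
)
print(result["answer"])
```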
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7262 | 0.17 | 500 | 3.1922 |
| 2.2239 | 0.35 | 1000 | 1.5877 |
| 1.602 | 0.52 | 1500 | 1.4144 |
| 1.3619 | 0.69 | 2000 | 1.2172 |
| 1.2611 | 0.86 | 2500 | 1.0703 |
| 1.1354 | 1.04 | 3000 | 1.0719 |
| 0.9851 | 1.21 | 3500 | 1.0052 |
| 0.9205 | 1.38 | 4000 | 1.0223 |
| 0.8753 | 1.55 | 4500 | 0.9671 |
| 0.8751 | 1.73 | 5000 | 1.0368 |
| 0.8535 | 1.9 | 5500 | 0.9146 |
| 0.7376 | 2.07 | 6000 | 1.0462 |
| 0.6256 | 2.24 | 6500 | 1.0606 |
| 0.6041 | 2.42 | 7000 | 1.1533 |
| 0.6403 | 2.59 | 7500 | 1.0871 |
| 0.6208 | 2.76 | 8000 | 1.0743 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
westphal-jan/roberta-base-mnli | efdcedd0102c605cae84cd03b6619a5784e38791 | 2022-04-13T13:22:05.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | westphal-jan | null | westphal-jan/roberta-base-mnli | 14 | null | transformers | 9,934 | Entry not found |
omar47/wav2vec2-large-xls-r-300m-urdu-colab-cv8 | 5779938327a2fbaf4067857f341c859e431a2b23 | 2022-04-20T02:57:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | omar47 | null | omar47/wav2vec2-large-xls-r-300m-urdu-colab-cv8 | 14 | null | transformers | 9,935 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-urdu-colab-cv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu-colab-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4651
- Wer: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 20.3271 | 1.27 | 32 | 20.3487 | 1.0 |
| 11.0206 | 2.55 | 64 | 7.7343 | 1.0 |
| 5.8023 | 3.82 | 96 | 5.4188 | 1.0 |
| 4.5872 | 5.12 | 128 | 4.1428 | 1.0 |
| 3.6691 | 6.39 | 160 | 3.4557 | 1.0 |
| 3.3143 | 7.67 | 192 | 3.2663 | 1.0 |
| 3.1689 | 8.94 | 224 | 3.1022 | 0.9982 |
| 3.1472 | 10.24 | 256 | 3.0544 | 0.9993 |
| 3.1091 | 11.51 | 288 | 3.0327 | 0.9978 |
| 3.0437 | 12.78 | 320 | 3.0288 | 1.0 |
| 2.9981 | 14.08 | 352 | 2.8645 | 1.0 |
| 2.5244 | 15.35 | 384 | 2.0238 | 0.9686 |
| 1.4962 | 16.63 | 416 | 1.5885 | 0.9118 |
| 1.0138 | 17.9 | 448 | 1.3656 | 0.8155 |
| 0.7655 | 19.2 | 480 | 1.4592 | 0.8125 |
| 0.6267 | 20.47 | 512 | 1.4170 | 0.7867 |
| 0.5127 | 21.75 | 544 | 1.3200 | 0.7716 |
| 0.4422 | 23.04 | 576 | 1.4082 | 0.7727 |
| 0.3482 | 24.31 | 608 | 1.3932 | 0.7432 |
| 0.3128 | 25.59 | 640 | 1.4059 | 0.7432 |
| 0.2762 | 26.86 | 672 | 1.4689 | 0.7336 |
| 0.2451 | 28.16 | 704 | 1.4318 | 0.7207 |
| 0.2104 | 29.43 | 736 | 1.4304 | 0.7399 |
| 0.1858 | 30.71 | 768 | 1.4586 | 0.7225 |
| 0.1779 | 31.98 | 800 | 1.4948 | 0.7284 |
| 0.1546 | 33.27 | 832 | 1.4960 | 0.7173 |
| 0.1457 | 34.55 | 864 | 1.4949 | 0.7077 |
| 0.1333 | 35.82 | 896 | 1.4656 | 0.7085 |
| 0.1212 | 37.12 | 928 | 1.5061 | 0.7033 |
| 0.1162 | 38.39 | 960 | 1.4653 | 0.7055 |
| 0.1043 | 39.67 | 992 | 1.4651 | 0.7 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Manishkalra/finetuning-sentiment-model-3000-samples | 6c2209e78818feded6f7617ff4394342bf0bb0e3 | 2022-04-14T11:04:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Manishkalra | null | Manishkalra/finetuning-sentiment-model-3000-samples | 14 | null | transformers | 9,936 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8769716088328076
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3186
- Accuracy: 0.87
- F1: 0.8770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
stevems1/distilroberta-base-SmithsModel | 58c4bf9998c1628775cf3ae0e8a1ecadd8184b93 | 2022-04-17T11:08:15.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | stevems1 | null | stevems1/distilroberta-base-SmithsModel | 14 | null | transformers | 9,937 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-SmithsModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-SmithsModel
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6589 | 1.0 | 830 | 2.8652 |
| 2.8362 | 2.0 | 1660 | 2.4309 |
| 2.6291 | 3.0 | 2490 | 2.2826 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ChrisZeng/electra-large-discriminator-nli-efl-tweeteval | 3da15e375bcc1a4383a487d270380b1d3b1cbc58 | 2022-04-20T02:05:43.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ChrisZeng | null | ChrisZeng/electra-large-discriminator-nli-efl-tweeteval | 14 | null | transformers | 9,938 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electra-large-discriminator-nli-efl-tweeteval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-large-discriminator-nli-efl-tweeteval
This model is a fine-tuned version of [ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli](https://huggingface.co/ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.7943
- F1: 0.7872
- Loss: 0.3004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|
| 0.4384 | 1.0 | 163 | 0.7444 | 0.7308 | 0.3962 |
| 0.3447 | 2.0 | 326 | 0.7659 | 0.7552 | 0.3410 |
| 0.3057 | 3.0 | 489 | 0.7750 | 0.7688 | 0.3234 |
| 0.287 | 4.0 | 652 | 0.7857 | 0.7779 | 0.3069 |
| 0.2742 | 5.0 | 815 | 0.7887 | 0.7822 | 0.3030 |
| 0.2676 | 6.0 | 978 | 0.7939 | 0.7851 | 0.2982 |
| 0.2585 | 7.0 | 1141 | 0.7909 | 0.7822 | 0.3002 |
| 0.2526 | 8.0 | 1304 | 0.7943 | 0.7876 | 0.3052 |
| 0.2479 | 9.0 | 1467 | 0.7939 | 0.7847 | 0.2997 |
| 0.2451 | 10.0 | 1630 | 0.7956 | 0.7873 | 0.3014 |
| 0.2397 | 11.0 | 1793 | 0.7943 | 0.7872 | 0.3004 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.0.dev20220417
- Datasets 2.1.0
- Tokenizers 0.10.3
|
zafercavdar/distilbert-base-turkish-cased-emotion | d10ad71ee7fca4c4c3462c968a839a388542a859 | 2022-04-19T22:03:18.000Z | [
"pytorch",
"distilbert",
"text-classification",
"tr",
"dataset:emotion (Translated to Turkish)",
"transformers",
"emotion"
]
| text-classification | false | zafercavdar | null | zafercavdar/distilbert-base-turkish-cased-emotion | 14 | 2 | transformers | 9,939 | ---
language:
- tr
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion (Translated to Turkish)
metrics:
- Accuracy, F1 Score
---
# distilbert-base-turkish-cased-emotion
## Model description:
[Distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) finetuned on the emotion dataset (translated to Turkish via the Google Translate API) using the HuggingFace Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
## Model Performance Comparison on the Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-turkish-cased-emotion](https://huggingface.co/zafercavdar/distilbert-base-turkish-cased-emotion) | 83.25 | 83.17 | 232.197 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",
model='zafercavdar/distilbert-base-turkish-cased-emotion',
return_all_scores=True)
prediction = classifier("Bu kütüphaneyi seviyorum, en iyi yanı kolay kullanımı.")
print(prediction)
"""
Output:
[
[
{'label': 'sadness', 'score': 0.0026786490343511105},
{'label': 'joy', 'score': 0.6600754261016846},
{'label': 'love', 'score': 0.3203163146972656},
{'label': 'anger', 'score': 0.004358913749456406},
{'label': 'fear', 'score': 0.002354539930820465},
{'label': 'surprise', 'score': 0.010216088965535164}
]
]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Eval results
```json
{
'eval_accuracy': 0.8325,
'eval_f1': 0.8317301441160213,
'eval_loss': 0.5021793842315674,
'eval_runtime': 8.6167,
'eval_samples_per_second': 232.108,
'eval_steps_per_second': 3.714
}
``` |
mwong/climatebert-base-f-fever-evidence-related | 9dbf55fb72a17bb33566384e44dd32876e2228fa | 2022-06-24T03:31:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:mwong/fever-evidence-related",
"transformers",
"text classification",
"fact checking",
"license:mit"
]
| text-classification | false | mwong | null | mwong/climatebert-base-f-fever-evidence-related | 14 | 1 | transformers | 9,940 | ---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# FeverBert-related
FeverBert-related is a classifier model that predicts whether climate-related evidence is related to a query claim. The model achieved an F1 score of 91.23% on the "mwong/fever-evidence-related" test dataset. Starting from the pretrained ClimateBert-f model, the classifier head was trained on the FEVER dataset.
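A minimal usage sketch: the claim and the evidence are joined with `</s></s>`, following the widget example above. The label names returned by the pipeline are not documented in this card, so treat the label mapping as an assumption to verify against the output.
```python
from transformers import pipeline

# Sketch: pair classification with claim and evidence joined by "</s></s>",
# mirroring the widget example above. Label-name mapping is unverified.
classifier = pipeline(
    "text-classification",
    model="mwong/climatebert-base-f-fever-evidence-related",
)
claim = "Earth's changing climate poses significant environmental and economic risks."
evidence = "Legislation has been considered because of fears of climate change."
print(classifier(f"{claim}</s></s>{evidence}"))
```
|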
RajaRang/distilbert-base-uncased-finetuned-emotion | b4b63d9f2005928910132fb323f15f8b6b545b44 | 2022-04-21T14:43:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | RajaRang | null | RajaRang/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,941 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251264359849074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8002 | 1.0 | 250 | 0.3094 | 0.9065 | 0.9038 |
| 0.2409 | 2.0 | 500 | 0.2183 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Sarim24/distilbert-base-uncased-finetuned-emotion | 045e24c0fd3ba6e841e9e7ff371e3e99e09baf4d | 2022-07-13T13:03:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Sarim24 | null | Sarim24/distilbert-base-uncased-finetuned-emotion | 14 | 1 | transformers | 9,942 | |
mrm8488/convnext-tiny-finetuned-beans | 66af9fdbba4365ace630be92b147e3bc9a2c5e8d | 2022-04-25T13:32:06.000Z | [
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | mrm8488 | null | mrm8488/convnext-tiny-finetuned-beans | 14 | 1 | transformers | 9,943 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: convnext-tiny-finetuned-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9609375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-finetuned-beans
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1255
- Accuracy: 0.9609

## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7171
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 37 | 0.6175 | 0.8828 |
| No log | 2.0 | 74 | 0.2307 | 0.9609 |
| 0.5237 | 3.0 | 111 | 0.1406 | 0.9531 |
| 0.5237 | 4.0 | 148 | 0.1165 | 0.9688 |
| 0.5237 | 5.0 | 185 | 0.1255 | 0.9609 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
UT/BRTW | 83508ba3e76bdb08b6612f690ed701336858bb38 | 2022-04-25T17:24:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | UT | null | UT/BRTW | 14 | null | transformers | 9,944 | Entry not found |
yihsuan/best_model_0426_small | 3e9347373c4981807c9cff6a2de6816a1755bea2 | 2022-04-27T06:04:35.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"zh",
"transformers",
"summarization",
"mT5",
"autotrain_compatible"
]
| summarization | false | yihsuan | null | yihsuan/best_model_0426_small | 14 | null | transformers | 9,945 | ---
tags:
- summarization
- mT5
language:
- zh
widget:
- text: "專家稱維康桑格研究所(Wellcome Sanger Institute)的上述研究發現「令人震驚」而且「發人深省」。基因變異指關於我們身體成長和管理的相關指令,也就是DNA當中發生的變化。長期以來,變異一直被當作癌症的根源,但是數十年來關於變異是否對衰老有重要影響一直存在爭論。桑格研究所的研究人員說他們得到了「第一個試驗性證據」,證明了兩者的關係。他們分析了預期壽命各異的物種基因變異的不同速度。研究人員分析了貓、黑白疣猴、狗、雪貂、長頸鹿、馬、人、獅子、裸鼴鼠、兔子、老鼠、環尾狐猴和老虎等十幾種動物的DNA。發表在《自然》雜誌上的研究顯示,老鼠在短暫的生命當中每年經歷了將近800次變異,老鼠的壽命一般不到4年。"
--- |
dmjimenezbravo/electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish | f4d475bae8125d2d4d38b0e72b4df972f9a1f9f2 | 2022-04-27T17:01:40.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | dmjimenezbravo | null | dmjimenezbravo/electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish | 14 | null | transformers | 9,946 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-restaurant-sentiment-analysis-usElectionTweets1Jul11Nov-spanish
This model is a fine-tuned version of [mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis](https://huggingface.co/mrm8488/electricidad-small-finetuned-restaurant-sentiment-analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3534
- Accuracy: 0.7585
- F1: 0.7585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.8145 | 1.0 | 1222 | 0.7033 | 0.7168 | 0.7168 |
| 0.7016 | 2.0 | 2444 | 0.5936 | 0.7731 | 0.7731 |
| 0.6183 | 3.0 | 3666 | 0.5190 | 0.8046 | 0.8046 |
| 0.5516 | 4.0 | 4888 | 0.4678 | 0.8301 | 0.8301 |
| 0.4885 | 5.0 | 6110 | 0.3670 | 0.8713 | 0.8713 |
| 0.4353 | 6.0 | 7332 | 0.3119 | 0.8987 | 0.8987 |
| 0.3957 | 7.0 | 8554 | 0.2908 | 0.9084 | 0.9084 |
| 0.3386 | 8.0 | 9776 | 0.2108 | 0.9348 | 0.9348 |
| 0.2976 | 9.0 | 10998 | 0.1912 | 0.9422 | 0.9422 |
| 0.2828 | 10.0 | 12220 | 0.1496 | 0.9591 | 0.9591 |
| 0.243 | 11.0 | 13442 | 0.1326 | 0.9639 | 0.9639 |
| 0.2049 | 12.0 | 14664 | 0.1249 | 0.9693 | 0.9693 |
| 0.2041 | 13.0 | 15886 | 0.1049 | 0.9752 | 0.9752 |
| 0.1855 | 14.0 | 17108 | 0.0816 | 0.9798 | 0.9798 |
| 0.1637 | 15.0 | 18330 | 0.0733 | 0.9836 | 0.9836 |
| 0.1531 | 16.0 | 19552 | 0.0577 | 0.9880 | 0.9880 |
| 0.1221 | 17.0 | 20774 | 0.0581 | 0.9895 | 0.9895 |
| 0.1207 | 18.0 | 21996 | 0.0463 | 0.9903 | 0.9903 |
| 0.1152 | 19.0 | 23218 | 0.0472 | 0.9908 | 0.9908 |
| 0.1028 | 20.0 | 24440 | 0.0356 | 0.9936 | 0.9936 |
| 0.1027 | 21.0 | 25662 | 0.0278 | 0.9957 | 0.9957 |
| 0.0915 | 22.0 | 26884 | 0.0344 | 0.9946 | 0.9946 |
| 0.0887 | 23.0 | 28106 | 0.0243 | 0.9954 | 0.9954 |
| 0.0713 | 24.0 | 29328 | 0.0208 | 0.9969 | 0.9969 |
| 0.0749 | 25.0 | 30550 | 0.0198 | 0.9964 | 0.9964 |
| 0.0699 | 26.0 | 31772 | 0.0153 | 0.9969 | 0.9969 |
| 0.0567 | 27.0 | 32994 | 0.0144 | 0.9972 | 0.9972 |
| 0.0613 | 28.0 | 34216 | 0.0105 | 0.9982 | 0.9982 |
| 0.0567 | 29.0 | 35438 | 0.0117 | 0.9982 | 0.9982 |
| 0.0483 | 30.0 | 36660 | 0.0072 | 0.9985 | 0.9985 |
| 0.0469 | 31.0 | 37882 | 0.0063 | 0.9987 | 0.9987 |
| 0.0485 | 32.0 | 39104 | 0.0067 | 0.9985 | 0.9985 |
| 0.0464 | 33.0 | 40326 | 0.0020 | 0.9995 | 0.9995 |
| 0.0472 | 34.0 | 41548 | 0.0036 | 0.9995 | 0.9995 |
| 0.0388 | 35.0 | 42770 | 0.0016 | 0.9995 | 0.9995 |
| 0.0248 | 36.0 | 43992 | 0.0047 | 0.9990 | 0.9990 |
| 0.0396 | 37.0 | 45214 | 0.0004 | 0.9997 | 0.9997 |
| 0.0331 | 38.0 | 46436 | 0.0020 | 0.9995 | 0.9995 |
| 0.0292 | 39.0 | 47658 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 40.0 | 48880 | 0.0001 | 1.0 | 1.0 |
| 0.0285 | 41.0 | 50102 | 0.0000 | 1.0 | 1.0 |
| 0.0319 | 42.0 | 51324 | 0.0000 | 1.0 | 1.0 |
| 0.0244 | 43.0 | 52546 | 0.0000 | 1.0 | 1.0 |
| 0.0261 | 44.0 | 53768 | 0.0001 | 1.0 | 1.0 |
| 0.0256 | 45.0 | 54990 | 0.0000 | 1.0 | 1.0 |
| 0.0258 | 46.0 | 56212 | 0.0000 | 1.0 | 1.0 |
| 0.0173 | 47.0 | 57434 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 48.0 | 58656 | 0.0000 | 1.0 | 1.0 |
| 0.0241 | 49.0 | 59878 | 0.0000 | 1.0 | 1.0 |
| 0.019 | 50.0 | 61100 | 0.0000 | 1.0 | 1.0 |
| 0.0184 | 51.0 | 62322 | 0.0000 | 1.0 | 1.0 |
| 0.0139 | 52.0 | 63544 | 0.0000 | 1.0 | 1.0 |
| 0.0159 | 53.0 | 64766 | 0.0000 | 1.0 | 1.0 |
| 0.0119 | 54.0 | 65988 | 0.0000 | 1.0 | 1.0 |
| 0.0253 | 55.0 | 67210 | 0.0000 | 1.0 | 1.0 |
| 0.0166 | 56.0 | 68432 | 0.0000 | 1.0 | 1.0 |
| 0.0125 | 57.0 | 69654 | 0.0000 | 1.0 | 1.0 |
| 0.0155 | 58.0 | 70876 | 0.0000 | 1.0 | 1.0 |
| 0.0106 | 59.0 | 72098 | 0.0000 | 1.0 | 1.0 |
| 0.0083 | 60.0 | 73320 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/Concise | a2dfe50d64305ef9c5e596fa4137202c47fce0dd | 2022-05-01T01:33:50.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | BigSalmon | null | BigSalmon/Concise | 14 | null | transformers | 9,947 | how to start prompt:
```
wordy:
```
example:
```
wordy: the ndp has turned into the country's darling of the young.
```
output:
```
the ndp is youth-driven.
```
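A minimal generation sketch for the prompt format above; the decoding settings here are illustrative guesses, not the author's recommendation.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch: feed a "wordy: ..." prompt to the T5 model and decode a concise rewrite.
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Concise")
model = AutoModelForSeq2SeqLM.from_pretrained("BigSalmon/Concise")

prompt = "wordy: the ndp has turned into the country's darling of the young."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
|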
patrickvonplaten/wav2vec2-conformer-rope-large-960h-ft-4-gram | b285424449c311a867bb87d57645c8d58a527149 | 2022-05-24T11:10:41.000Z | [
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-conformer-rope-large-960h-ft-4-gram | 14 | null | transformers | 9,948 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-conformer-rope-large-960h-ft-4-gram
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.57
---
# Wav2Vec2-Conformer-Large-960h with Rotary Position Embeddings + 4-gram
This model is identical to [Facebook's wav2vec2-conformer-rope-large-960h-ft](https://huggingface.co/facebook/wav2vec2-conformer-rope-large-960h-ft), but is
augmented with an English 4-gram language model. The `4-gram.arpa.gz` file from [Librispeech's official ngrams](https://www.openslr.org/11) is used.
## Evaluation
This code snippet shows how to evaluate **patrickvonplaten/wav2vec2-conformer-rope-large-960h-ft-4-gram** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torch
from jiwer import wer
model_id = "patrickvonplaten/wav2vec2-conformer-rope-large-960h-ft-4-gram"
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
model = AutoModelForCTC.from_pretrained(model_id).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
inputs = {k: v.to("cuda") for k,v in inputs.items()}
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.cpu().numpy()).text[0]
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print(wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.88 | 3.57 | |
nloc2578/pegasus-question-generator | cdca805f3ef6323f09db4ad4b19a2c5a5f363f67 | 2022-05-02T15:09:37.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | nloc2578 | null | nloc2578/pegasus-question-generator | 14 | null | transformers | 9,949 | ---
tags:
- generated_from_trainer
model-index:
- name: pegasus-question-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-question-generator
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.161 | 0.5 | 4000 | 2.0183 |
| 1.9513 | 1.0 | 8000 | 1.8741 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
enimai/mbart-large-50-paraphrase-finetuned-for-de | 9f477e27191e453e4e04454925fce94f8c6670e3 | 2022-05-03T16:53:49.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | enimai | null | enimai/mbart-large-50-paraphrase-finetuned-for-de | 14 | null | transformers | 9,950 | ---
license: apache-2.0
---
|
svalabs/twitter-xlm-roberta-crypto-spam | 101aee11e6fce970064619ac728a0ad759acbe7c | 2022-05-04T13:47:31.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | svalabs | null | svalabs/twitter-xlm-roberta-crypto-spam | 14 | null | transformers | 9,951 | Entry not found |
CarlCochet/trajectory-transformer-hopper-medium-expert-v2 | 81424ac7fe308cef789534a7987fd4c2efe681a7 | 2022-05-12T17:04:17.000Z | [
"pytorch",
"trajectory_transformer",
"feature-extraction",
"transformers",
"license:mit"
]
| feature-extraction | false | CarlCochet | null | CarlCochet/trajectory-transformer-hopper-medium-expert-v2 | 14 | null | transformers | 9,952 | ---
license: mit
---
|
airi/bert-finetuned-ner | d9a98ad8fd36df5e4e7bc07d52319eb7c5fcb90d | 2022-05-08T08:59:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | airi | null | airi/bert-finetuned-ner | 14 | null | transformers | 9,953 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [Davlan/bert-base-multilingual-cased-ner-hrl](https://huggingface.co/Davlan/bert-base-multilingual-cased-ner-hrl) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 87 | 0.0594 | 0.7613 | 0.8779 | 0.8154 | 0.9873 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Jiexing/spider_relation_t5_3b-3392 | 904314792e4bdc0ba6149ae30c719e149339cb2e | 2022-05-08T04:04:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Jiexing | null | Jiexing/spider_relation_t5_3b-3392 | 14 | null | transformers | 9,954 | Entry not found |
Jeevesh8/bert_ft_qqp-5 | cf32fe6192160ff900765602e6d152c71ef88afe | 2022-05-09T09:42:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/bert_ft_qqp-5 | 14 | null | transformers | 9,955 | Entry not found |
peter2000/roberta-base-finetuned-osdg | 36863192d713fc744c592267badb4ecdaf71e726 | 2022-05-12T15:58:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | peter2000 | null | peter2000/roberta-base-finetuned-osdg | 14 | null | transformers | 9,956 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-osdg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-osdg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8286
- eval_Acc: 0.7746
- eval_runtime: 27.6597
- eval_samples_per_second: 116.126
- eval_steps_per_second: 3.652
- epoch: 1.0
- step: 904
## Model description
The model is trained on the data from OSDG (https://osdg.ai/)
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
uaritm/df_lik_n_mg_221 | 7f9e49cf776e679c026cac2aca76d8317e6b2c22 | 2022-05-09T18:37:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"uk",
"transformers",
"russian",
"ukrainian",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | uaritm | null | uaritm/df_lik_n_mg_221 | 14 | null | transformers | 9,957 | ---
language: ["ru", "uk"]
tags:
- russian
- ukrainian
license: mit
---
# A little about the model
The model is trained to answer questions on health topics (open-book, reading-comprehension-style question answering).
For training, a compact T5 model was used: cointegrated/rut5-base-multitask.
Training was conducted on a small set drawn from 220 thousand question-answer sentence pairs, so the model does not yet work as reliably as we would like.
The model is not a medical application, and using it for medical purposes is strongly discouraged!
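A minimal usage sketch follows. The input template — how the question and the supporting passage are combined — is an assumption here; check the repository for the format actually used in training.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("uaritm/df_lik_n_mg_221")
model = AutoModelForSeq2SeqLM.from_pretrained("uaritm/df_lik_n_mg_221")

# Russian example (the model targets ru/uk); joining question and passage
# with " | " is a hypothetical template, not a documented format.
text = "Почему кошки мурлычут? | Кошки часто мурлычут при контакте с человеком."
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
|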
eslamxm/mt5-base-finetuned-urdu | 533d98cdd18b22febc73192a11289c87bd79e7fe | 2022-06-14T18:12:44.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"urdu",
"ur",
"Abstractive Summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | eslamxm | null | eslamxm/mt5-base-finetuned-urdu | 14 | null | transformers | 9,958 | ---
license: apache-2.0
tags:
- summarization
- urdu
- ur
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-finetuned-urdu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-urdu
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on Urdu subset the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8954
- Rouge-1: 28.84
- Rouge-2: 13.87
- Rouge-l: 25.63
- Gen Len: 19.0
- Bertscore: 71.31
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 3.6205 | 1.0 | 2114 | 3.0871 | 26.45 | 11.4 | 23.26 | 19.0 | 70.76 |
| 3.2169 | 2.0 | 4228 | 2.9830 | 27.19 | 11.91 | 23.95 | 19.0 | 70.92 |
| 3.0787 | 3.0 | 6342 | 2.9284 | 27.9 | 12.57 | 24.62 | 18.99 | 71.13 |
| 2.9874 | 4.0 | 8456 | 2.9049 | 28.28 | 12.91 | 24.99 | 18.99 | 71.28 |
| 2.9232 | 5.0 | 10570 | 2.8954 | 28.65 | 13.17 | 25.32 | 18.99 | 71.39 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CEBaB/bert-base-uncased.CEBaB.sa.3-class.exclusive.seed_77 | ac500cd5e07cb38da8f2cd805c74931314516d74 | 2022-05-11T01:22:57.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | CEBaB | null | CEBaB/bert-base-uncased.CEBaB.sa.3-class.exclusive.seed_77 | 14 | null | transformers | 9,959 | Entry not found |
ncfrey/ChemGPT-19M | 08876002a3a2e6f47cc454ba4153c6cffb6dd206 | 2022-06-15T15:19:57.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"chemistry"
]
| text-generation | false | ncfrey | null | ncfrey/ChemGPT-19M | 14 | null | transformers | 9,960 | ---
tags:
- chemistry
---
# ChemGPT 19M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
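For example, a minimal generation sketch — assuming the hosted tokenizer accepts SELFIES-style token strings such as `[C][C][O]`; verify against the repo's tokenizer files before relying on it:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: sample molecule continuations from ChemGPT. The SELFIES-style
# prompt format is an assumption; adjust to the tokenizer's actual vocabulary.
tokenizer = AutoTokenizer.from_pretrained("ncfrey/ChemGPT-19M")
model = AutoModelForCausalLM.from_pretrained("ncfrey/ChemGPT-19M")

inputs = tokenizer("[C][C][O]", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```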
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}}
This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
``` |
beltran/finetuning-sentiment-model-3000-samples | 79d04ba49ea0e08c173e6f6c21ec693cb0c3ef95 | 2022-05-13T10:29:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | beltran | null | beltran/finetuning-sentiment-model-3000-samples | 14 | null | transformers | 9,961 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8566666666666667
- name: F1
type: f1
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3185
- Accuracy: 0.8567
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
edwardgowsmith/roberta-base-unigram-prime | dce8af31825cf0e44c8e8fb69b976b4e38acb718 | 2022-05-13T12:31:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | edwardgowsmith | null | edwardgowsmith/roberta-base-unigram-prime | 14 | null | transformers | 9,962 | Entry not found |
buehlpa/bert-finetuned-ner | 26bb02495e29b23b169d080feac9355cb8f0cf2f | 2022-05-14T11:06:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | buehlpa | null | buehlpa/bert-finetuned-ner | 14 | null | transformers | 9,963 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9308580858085809
- name: Recall
type: recall
value: 0.9493436553349041
- name: F1
type: f1
value: 0.9400099983336112
- name: Accuracy
type: accuracy
value: 0.9862541943839407
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9309
- Recall: 0.9493
- F1: 0.9400
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0855 | 1.0 | 1756 | 0.0632 | 0.9191 | 0.9386 | 0.9287 | 0.9832 |
| 0.0414 | 2.0 | 3512 | 0.0572 | 0.9264 | 0.9475 | 0.9368 | 0.9855 |
| 0.0198 | 3.0 | 5268 | 0.0607 | 0.9309 | 0.9493 | 0.9400 | 0.9863 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
reallycarlaost/emobert-valence-5 | 160420ad23055846e681c23c5993467400e342f7 | 2022-05-14T17:18:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | reallycarlaost | null | reallycarlaost/emobert-valence-5 | 14 | null | transformers | 9,964 | Entry not found |
Tititun/consumer_super | ed4d3071ffc7b1e3068970eebd6f01c395c2cd8c | 2022-05-16T04:46:12.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Tititun | null | Tititun/consumer_super | 14 | null | transformers | 9,965 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: consumer_super
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# consumer_super
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
MrBananaHuman/prompt_gpt2 | 8216283d4d6c768a60964e1371d2f2865aa4d5fb | 2022-05-17T12:25:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | MrBananaHuman | null | MrBananaHuman/prompt_gpt2 | 14 | null | transformers | 9,966 | Entry not found |
charsiu/g2p_multilingual_byT5_tiny_16_layers | 4f069a19c28b65f9caea87d2aa8b3869742f0a26 | 2022-05-19T05:02:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | charsiu | null | charsiu/g2p_multilingual_byT5_tiny_16_layers | 14 | null | transformers | 9,967 | Entry not found |
MrVicente/bart_qa_assistant | 5e454d81c6e3659aafc9b64bade437d6be63ce7f | 2022-05-24T18:40:43.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:eli5",
"dataset:stackexchange(pets, cooking, gardening, diy, crafts)",
"transformers",
"generative qa",
"autotrain_compatible"
]
| text2text-generation | false | MrVicente | null | MrVicente/bart_qa_assistant | 14 | null | transformers | 9,968 | ---
language: en
tags:
- generative qa
datasets:
- eli5
- stackexchange(pets, cooking, gardening, diy, crafts)
---
Work by [Frederico Vicente](https://huggingface.co/mrvicente) & [Diogo Tavares](https://huggingface.co/d-c-t). We finetuned BART Large for the task of generative question answering. It was trained on eli5, askScience, and stackexchange data from the following forums: pets, cooking, gardening, diy, crafts.
Check demo: https://huggingface.co/spaces/unlisboa/bart_qa_assistant
### Usage
```python
from transformers import (
BartForConditionalGeneration,
BartTokenizer
)
import torch
import json
# Helper kept from the original snippet (not used in the example below).
def read_json_file_2_dict(filename, store_dir='.'):
with open(f'{store_dir}/{filename}', 'r', encoding='utf-8') as file:
return json.load(file)
def get_device():
# If there's a GPU available...
if torch.cuda.is_available():
device = torch.device("cuda")
n_gpus = torch.cuda.device_count()
first_gpu = torch.cuda.get_device_name(0)
print(f'There are {n_gpus} GPU(s) available.')
print(f'GPU gonna be used: {first_gpu}')
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
return device
model_name = 'unlisboa/bart_qa_assistant'
tokenizer = BartTokenizer.from_pretrained(model_name)
device = get_device()
model = BartForConditionalGeneration.from_pretrained(model_name).to(device)
model.eval()
question = "Why does my cat purr when I pet it?"  # example input; the original snippet left `question` undefined
model_input = tokenizer(question, truncation=True, padding=True, return_tensors="pt")
generated_answers_encoded = model.generate(input_ids=model_input["input_ids"].to(device),attention_mask=model_input["attention_mask"].to(device),
force_words_ids=None,
min_length=1,
max_length=100,
do_sample=True,
early_stopping=True,
num_beams=4,
temperature=1.0,
top_k=None,
top_p=None,
# eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=2,
num_return_sequences=1,
return_dict_in_generate=True,
output_scores=True)
response = tokenizer.batch_decode(generated_answers_encoded['sequences'], skip_special_tokens=True,clean_up_tokenization_spaces=True)
print(response)
```
Have fun! |
aiola/roberta-large-corener | 2c46b4e538608a6e7246e0ad1e70d3da1f069022 | 2022-07-03T14:16:17.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:Ontonotes",
"dataset:CoNLL04",
"transformers",
"NER",
"named entity recognition",
"RE",
"relation extraction",
"entity mention detection",
"EMD",
"coreference resolution",
"license:afl-3.0",
"autotrain_compatible"
]
| fill-mask | false | aiola | null | aiola/roberta-large-corener | 14 | null | transformers | 9,969 | ---
language:
- en
tags:
- NER
- named entity recognition
- RE
- relation extraction
- entity mention detection
- EMD
- coreference resolution
license: afl-3.0
datasets:
- Ontonotes
- CoNLL04
---
# CoReNer
## Demo
We released an online demo so you can easily play with the model. Check it out: [http://corener-demo.aiola-lab.com](http://corener-demo.aiola-lab.com).
The demo uses the [aiola/roberta-base-corener](https://huggingface.co/aiola/roberta-base-corener) model.
## Model description
A multi-task model for named-entity recognition, relation extraction, entity mention detection, and coreference resolution.
We model NER as a span classification task and relation extraction as a multi-label classification of (NER) span tuples.
Similarly, we model EMD as a span classification task and CR as a binary classification of (EMD) span tuples.
To construct the CR clusters, we keep the top antecedent of each mention, then compute the connected components of the mentions' undirected graph, as sketched below.
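A toy sketch of that clustering step — the function name and input shape here are hypothetical, for illustration only:
```python
from collections import defaultdict

def build_cr_clusters(top_antecedents):
    """Hypothetical helper: map each mention index to its top antecedent index
    (or None), then return coreference clusters as connected components."""
    adj = defaultdict(set)
    for mention, antecedent in top_antecedents.items():
        if antecedent is not None:          # keep only mentions with an antecedent
            adj[mention].add(antecedent)    # undirected edge mention <-> antecedent
            adj[antecedent].add(mention)
    seen, clusters = set(), []
    for node in adj:                        # DFS over the mention graph
        if node in seen:
            continue
        stack, component = [node], []
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            component.append(cur)
            stack.extend(adj[cur] - seen)
        clusters.append(sorted(component))
    return clusters

print(build_cr_clusters({1: 0, 2: 1, 4: 3, 5: None}))  # -> [[0, 1, 2], [3, 4]]
```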
The model was trained to recognize:
- Entity types: GPE, ORG, PERSON, DATE, NORP, CARDINAL, MONEY, PERCENT, WORK_OF_ART, ORDINAL, EVENT, LOC, TIME, FAC, QUANTITY, LAW, PRODUCT, LANGUAGE.
- Relation types: Kill, Live_In, Located_In, OrgBased_In, Work_For.
## Usage example
See additional details and usage examples at: https://github.com/aiola-lab/corener.
```python
import json
from transformers import AutoTokenizer
from corener.models import Corener, ModelOutput
from corener.data import MTLDataset
from corener.utils.prediction import convert_model_output
tokenizer = AutoTokenizer.from_pretrained("aiola/roberta-large-corener")
model = Corener.from_pretrained("aiola/roberta-large-corener")
model.eval()
examples = [
"Apple Park is the corporate headquarters of Apple Inc., located in Cupertino, California, United States. It was opened to employees in April 2017, while construction was still underway, and superseded the original headquarters at 1 Infinite Loop, which opened in 1993."
]
dataset = MTLDataset(
types=model.config.types,
tokenizer=tokenizer,
train_mode=False,
)
dataset.read_dataset(examples)
example = dataset.get_example(0) # get first example
output: ModelOutput = model(
input_ids=example.encodings,
context_masks=example.context_masks,
entity_masks=example.entity_masks,
entity_sizes=example.entity_sizes,
entity_spans=example.entity_spans,
entity_sample_masks=example.entity_sample_masks,
inference=True,
)
print(json.dumps(convert_model_output(output=output, batch=example, dataset=dataset), indent=2))
``` |
Andyrasika/distilbert-base-uncased-finetuned-emotion | 56e71813e2cc36c55060cf8d4f48ea8ea6937b6a | 2022-05-27T16:20:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Andyrasika | null | Andyrasika/distilbert-base-uncased-finetuned-emotion | 14 | 1 | transformers | 9,970 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9175
- name: F1
type: f1
value: 0.917868093658934
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2301
- Accuracy: 0.9175
- F1: 0.9179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8386 | 1.0 | 250 | 0.3275 | 0.904 | 0.9011 |
| 0.2572 | 2.0 | 500 | 0.2301 | 0.9175 | 0.9179 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
reemalyami/AraRoBERTa_Poem_classification | 26e33b8ca0e2307b612d3125e91c94757a97e3d6 | 2022-05-29T21:00:22.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | reemalyami | null | reemalyami/AraRoBERTa_Poem_classification | 14 | null | transformers | 9,971 | Entry not found |
shafin/distilbert-base-uncased-finetuned-cust-similarity-2 | 46e79ba2e1ba26576317dbd8d31f8f492a4e5e38 | 2022-05-29T12:12:09.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | shafin | null | shafin/distilbert-base-uncased-finetuned-cust-similarity-2 | 14 | 1 | sentence-transformers | 9,972 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# shafin/distilbert-base-uncased-finetuned-cust-similarity-2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('shafin/distilbert-base-uncased-finetuned-cust-similarity-2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=shafin/distilbert-base-uncased-finetuned-cust-similarity-2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4375 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Dense({'in_features': 256, 'out_features': 128, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
PrimeQA/t5-base-table-question-generator | 3e90424ecfb46ee16447a4addda0808c2c2c130a | 2022-06-29T13:20:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | PrimeQA | null | PrimeQA/t5-base-table-question-generator | 14 | null | transformers | 9,973 | ---
license: apache-2.0
---
# Model description
This is a [t5-base](https://huggingface.co/t5-base) model, finetuned to generate questions over tables using the [WikiSQL](https://huggingface.co/datasets/wikisql) dataset. It was trained to take the SQL query, the answer, and the column headers of a table as input and generate a question. For more information, check our T3QA [paper](https://aclanthology.org/2021.emnlp-main.342/) from EMNLP 2021.
# Overview
*Language model*: t5-base \
*Language*: English \
*Task*: Table Question Generation \
*Data*: WikiSQL
# Intended uses and limitations
One can use this model to generate questions given a table. Biases associated with the pre-training of T5 and with the WikiSQL dataset may be present.
## Usage
One can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework as in this example [notebook](https://github.com/primeqa/primeqa/blob/tableqg/notebooks/qg/tableqg_inference.ipynb).
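Outside PrimeQA, a bare-transformers sketch looks like the following. Note that the serialization of SQL, answer, and column headers in `source` is a guess for illustration — the linked notebook shows the exact input format the model was trained with.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "PrimeQA/t5-base-table-question-generator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical serialization of (SQL, answer, column headers); verify the real
# format in the PrimeQA inference notebook before relying on the outputs.
source = ("sql: SELECT Player WHERE Scoring_average = 72.0 "
          "answer: Tiger Woods columns: Player, Scoring_average")
inputs = tokenizer(source, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```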
## Citation
```bibtex
@inproceedings{chemmengath2021topic,
title={Topic Transferable Table Question Answering},
author={Chemmengath, Saneem and Kumar, Vishwajeet and
Bharadwaj, Samarth and Sen, Jaydeep and
Canim, Mustafa and Chakrabarti, Soumen and
Gliozzo, Alfio and Sankaranarayanan, Karthik},
booktitle={Proceedings of the 2021 Conference on
Empirical Methods in Natural Language Processing},
pages={4159--4172},
year={2021}
}
```
|
Cole/distilbert-base-uncased-finetuned-emotion | 4e25c8b6380612d6786807fbd41a834d7be3a2f7 | 2022-07-26T16:51:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Cole | null | Cole/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,974 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9274111800508488
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2148
- Accuracy: 0.9275
- F1: 0.9274
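For a quick sanity check, the checkpoint can be queried through the `text-classification` pipeline; a minimal sketch is shown below (the example sentence is an assumption, and labels may appear as `LABEL_0`–`LABEL_5` if `id2label` was not set in the config).

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Cole/distilbert-base-uncased-finetuned-emotion")

# The emotion dataset has six classes: sadness, joy, love, anger, fear, surprise
print(classifier("I can't believe how happy this makes me!"))
```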
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8308 | 1.0 | 250 | 0.3053 | 0.9075 | 0.9053 |
| 0.2421 | 2.0 | 500 | 0.2148 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
arrandi/distilbert-base-uncased-finetuned-emotion | 4d6ed1093496c2ccd9c8b34e54fb2c2d8b9fbe70 | 2022-05-31T15:20:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | arrandi | null | arrandi/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,975 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9341704717427723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.934
- F1: 0.9342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2606 | 1.0 | 250 | 0.1780 | 0.9285 | 0.9284 |
| 0.1486 | 2.0 | 500 | 0.1652 | 0.934 | 0.9342 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
caldana/distilbert-base-uncased-finetuned-emotion | d218b2bb15f2de4f6b6f3f7a6814f4d9a8f1c58c | 2022-05-31T23:07:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | caldana | null | caldana/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,976 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.927055679622598
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8251 | 1.0 | 250 | 0.3264 | 0.9015 | 0.8981 |
| 0.2534 | 2.0 | 500 | 0.2236 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
KM4STfulltext/SSCI-SciBERT-e4 | 2f2970495c81d40d07513154fe54319f0df8f9b4 | 2022-06-01T09:25:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | KM4STfulltext | null | KM4STfulltext/SSCI-SciBERT-e4 | 14 | 1 | transformers | 9,977 | ---
license: apache-2.0
---
# SSCI-BERT: A pretrained language model for social scientific text
## Introduction
Research on social science texts needs the support of natural language processing tools.
Pre-trained language models have greatly improved the accuracy of text mining on general texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of scientific texts in the social sciences.
We used the abstracts of social science research articles as the training set. Based on the deep language model framework of BERT, we constructed the [SSCI-BERT and SSCI-SciBERT](https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) pre-trained language models with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).
We designed four downstream text classification tasks on different social science article corpora to verify the performance of the models.
- SSCI-BERT and SSCI-SciBERT are trained on the abstracts of articles published in SSCI journals from 1986 to 2021. The training set used in the experiments comprises a total of `503910614 words`.
- Based on the idea of Domain-Adaptive Pretraining, `SSCI-BERT` and `SSCI-SciBERT` were obtained by continuing the pre-training of the BERT and SciBERT models, respectively, on a large number of scientific-article abstracts, yielding pre-trained models for the automatic processing of social science research texts.
## News
- 2022-03-24: SSCI-BERT and SSCI-SciBERT have been put forward for the first time.
## How to use
### Huggingface Transformers
The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can be used to download the SSCI-BERT and SSCI-SciBERT models directly.
- SSCI-BERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
```
- SSCI-SciBERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
```
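Since these checkpoints are masked language models, they can also be queried directly through the `fill-mask` pipeline. A minimal sketch using the e4 SciBERT variant hosted in this repository is shown below; the example sentence is an assumption.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KM4STfulltext/SSCI-SciBERT-e4")

# Predict the masked token in a social-science style sentence
for pred in fill_mask("Survey data were analysed with structural equation [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```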
### Download Models
- The version of the model we provide is `PyTorch`.
### From Huggingface
- Download directly through Huggingface's official website.
- [KM4STfulltext/SSCI-BERT-e2](https://huggingface.co/KM4STfulltext/SSCI-BERT-e2)
- [KM4STfulltext/SSCI-SciBERT-e2](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e2)
- [KM4STfulltext/SSCI-BERT-e4 ](https://huggingface.co/KM4STfulltext/SSCI-BERT-e4)
- [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4)
### From Google Drive
We have put the model on Google Drive for users.
| Model | DATASET(year) | Base Model |
| ------------------------------------------------------------ | ------------- | ---------------------- |
| [SSCI-BERT-e2](https://drive.google.com/drive/folders/1xEDnovlwGO2JxqCaf3rdjS2cB6DOxhj4?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e2](https://drive.google.com/drive/folders/16DtIvnHvbrR_92MwgthRRsULW6An9te1?usp=sharing) (recommended) | 1986-2021 | Scibert-scivocab-cased |
| [SSCI-BERT-e4](https://drive.google.com/drive/folders/1sr6Av8p904Jrjps37g7E8aj4HnAHXSxW?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e4](https://drive.google.com/drive/folders/1ty-b4TIFu8FbilgC4VcI7Bgn_O5MDMVe?usp=sharing) | 1986-2021 | Scibert-scivocab-cased |
## Evaluation & Results
- We use SSCI-BERT and SSCI-SciBERT to perform text classification on different social science research corpora. The experimental results are as follows. The relevant datasets are available for download in the **Verification task datasets** folder of this project.
#### JCR Title Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 28.43 | 22.06 | 21.86 |
| Scibert-scivocab-cased | 38.48 | 33.89 | 33.92 |
| SSCI-BERT-e2 | 40.43 | 35.37 | 35.33 |
| SSCI-SciBERT-e2 | 41.35 | 37.27 | 37.25 |
| SSCI-BERT-e4 | 40.65 | 35.49 | 35.40 |
| SSCI-SciBERT-e4 | 41.13 | 36.96 | 36.94 |
| Support | 2300 | 2300 | 2300 |
#### JCR Abstract Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 48.59 | 42.8 | 42.82 |
| Scibert-scivocab-cased | 55.59 | 51.4 | 51.81 |
| SSCI-BERT-e2 | 58.05 | 53.31 | 53.73 |
| SSCI-SciBERT-e2 | 59.95 | 56.51 | 57.12 |
| SSCI-BERT-e4 | 59.00 | 54.97 | 55.59 |
| SSCI-SciBERT-e4 | 60.00 | 56.38 | 56.90 |
| Support | 2200 | 2200 | 2200 |
#### JCR Mixed Titles and Abstracts Dataset
| **Model** | **accuracy** | **macro avg** | **weighted avg** |
| ---------------------- | ------------ | -------------- | ----------------- |
| Bert-base-cased | 58.24 | 57.27 | 57.25 |
| Scibert-scivocab-cased | 59.58 | 58.65 | 58.68 |
| SSCI-BERT-e2 | 60.89 | 60.24 | 60.30 |
| SSCI-SciBERT-e2 | 60.96 | 60.54 | 60.51 |
| SSCI-BERT-e4 | 61.00 | 60.48 | 60.43 |
| SSCI-SciBERT-e4 | 61.24 | 60.71 | 60.75 |
| Support | 4500 | 4500 | 4500 |
#### SSCI Abstract Structural Function Recognition (Classify Dataset)
| | Bert-base-cased | SSCI-BERT-e2 | SSCI-BERT-e4 | support |
| ------------ | -------------------------- | ------------------- | ------------------- | ----------- |
| B | 63.77 | 64.29 | 64.63 | 224 |
| P | 53.66 | 57.14 | 57.99 | 95 |
| M | 87.63 | 88.43 | 89.06 | 323 |
| R | 86.81 | 88.28 | **88.47** | 419 |
| C | 78.32 | 79.82 | 78.95 | 316 |
| accuracy | 79.59 | 80.9 | 80.97 | 1377 |
| macro avg | 74.04 | 75.59 | 75.82 | 1377 |
| weighted avg | 79.02 | 80.32 | 80.44 | 1377 |
| | **Scibert-scivocab-cased** | **SSCI-SciBERT-e2** | **SSCI-SciBERT-e4** | **support** |
| B | 69.98 | **70.95** | **70.95** | 224 |
| P | 58.89 | **60.12** | 58.96 | 95 |
| M | 89.37 | **90.12** | 88.11 | 323 |
| R | 87.66 | 88.07 | 87.44 | 419 |
| C | 80.7 | 82.61 | **82.94** | 316 |
| accuracy | 81.63 | **82.72** | 82.06 | 1377 |
| macro avg | 77.32 | **78.37** | 77.68 | 1377 |
| weighted avg | 81.6 | **82.58** | 81.92 | 1377 |
## Cited
- If our content is helpful for your research work, please quote our research in your article.
- If you want to quote our research, you can use this url (https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) as an alternative before our paper is published.
## Disclaimer
- The experimental results presented in the report reflect performance only under a specific dataset and hyperparameter combination and do not characterize each model in general. The experimental results may change due to random seeds and computing equipment.
- **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.**
## Acknowledgment
- SSCI-BERT was trained based on [BERT-Base-Cased](https://github.com/google-research/bert).
- SSCI-SciBERT was trained based on [scibert-scivocab-cased](https://github.com/allenai/scibert).
|
MadFace/t5-cnn | 0214db82331d763598634e9d5144e9c4814ebcb4 | 2022-06-05T06:11:02.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | MadFace | null | MadFace/t5-cnn | 14 | null | transformers | 9,978 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-cnn
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4562
- Rouge1: 25.1836
- Rouge2: 12.0806
- Rougel: 20.818
- Rougelsum: 23.6868
- Gen Len: 18.9986
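A minimal inference sketch with the `summarization` pipeline is shown below; the example article is an assumption, and whether the `summarize:` prefix is added automatically depends on how the checkpoint's config was saved.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="MadFace/t5-cnn")

# If the checkpoint lacks T5 task-specific params, prepend "summarize: " manually.
article = ("The city council approved a new transit plan on Tuesday, adding three "
           "bus routes and extending light-rail service to the airport. Officials "
           "said construction is expected to begin next spring.")
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```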
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.4286 | 1.0 | 50000 | 1.4562 | 25.1836 | 12.0806 | 20.818 | 23.6868 | 18.9986 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RUCAIBox/mtl-open-dialog | f1cb197098502d76475494fccc335ce303789356 | 2022-06-27T02:27:15.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"conversational",
"license:apache-2.0"
]
| text2text-generation | false | RUCAIBox | null | RUCAIBox/mtl-open-dialog | 14 | null | transformers | 9,979 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- conversational
pipeline_tag: text2text-generation
widget:
- text: "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?"
example_title: "Example1"
- text: "Given the dialog: i used to scare for darkness [X_SEP] it feels like hitting to blank wall when i see the darkness [SEP] Oh ya? I don't really see how [SEP] dont you feel so.. its a wonder [SEP] I do actually hit blank walls a lot of times but i get by"
example_title: "Example2"
---
# MTL-open-dialog
The MTL-open-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-open-dialog is supervised pre-trained using a mixture of labeled open dialogue system datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-open-dialog is specially designed for open dialogue system (conversation) tasks, such as chitchat (PersonaChat, DailyDialog), knowledge grounded conversation (Topical-Chat, Wizard of Wikipedia) and visual dialog (DSTC7-AVSD).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-open-dialog")
>>> inputs = tokenizer(
... "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Yes he won the Hong Kong Cha Cha championship in 1958']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
prajdabre/morisien_english | 96af0505897f13345c0e220e45f8679297c580ed | 2022-06-07T09:55:36.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | prajdabre | null | prajdabre/morisien_english | 14 | 1 | transformers | 9,980 | ---
license: mit
widget:
- text: Kan bann mor pou releve, bann dimoun pa pou marie. </s> <2cr>
---
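## Usage
A rough inference sketch with plain `transformers` is shown below. The `</s> <2cr>` suffix mirrors the widget example above; whether a target-language tag (e.g. `<2en>`) must additionally be forced as the first decoder token, as in IndicBART-style models, is an open assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("prajdabre/morisien_english")
model = AutoModelForSeq2SeqLM.from_pretrained("prajdabre/morisien_english")

# Morisien source sentence, tagged as in the widget example above
text = "Kan bann mor pou releve, bann dimoun pa pou marie. </s> <2cr>"
inputs = tokenizer(text, return_tensors="pt")
ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```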
|
csebuetnlp/banglishbert_generator | 6175a1e02224b65e8bce257c85becdf3e5f00872 | 2022-06-07T12:12:59.000Z | [
"pytorch",
"electra",
"fill-mask",
"bn",
"en",
"arxiv:2101.00204",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | csebuetnlp | null | csebuetnlp/banglishbert_generator | 14 | null | transformers | 9,981 | ---
language:
- bn
- en
licenses:
- cc-by-nc-sa-4.0
---
# BanglishBERT
This repository contains the pretrained generator checkpoint of the model [**BanglishBERT**](https://huggingface.co/csebuetnlp/banglishbert). This is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) generator model pretrained with the Masked Language Modeling (MLM) objective on large amounts of Bengali and English corpora.
**Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer).
## Using this model for MLM in `transformers` (tested on 4.11.0.dev0)
```python
from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="csebuetnlp/banglishbert_generator",
tokenizer="csebuetnlp/banglishbert_generator"
)
print(
fill_mask(
normalize(f"Paris is the {fill_mask.tokenizer.mask_token} of France.")
)
)
```
If you use this model, please cite the following paper:
```
@inproceedings{bhattacharjee-etal-2022-banglabert,
    title = {BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla},
author = "Bhattacharjee, Abhik and
Hasan, Tahmid and
Mubasshir, Kazi and
Islam, Md. Saiful and
      Ahmad, Wasi Uddin and
Iqbal, Anindya and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the North American Chapter of the Association for Computational Linguistics: NAACL 2022",
    month = jul,
year = {2022},
url = {https://arxiv.org/abs/2101.00204},
eprinttype = {arXiv},
eprint = {2101.00204}
}
```
If you use the normalization module, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
|
ObamaCodingReal/DialoGPT-large-NickGERai | a3bb895c4ba66a4bd4bdd38c6d241920595ebe8a | 2022-06-09T01:41:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | ObamaCodingReal | null | ObamaCodingReal/DialoGPT-large-NickGERai | 14 | null | transformers | 9,982 | ---
tags:
- conversational
---
# horrendous amalgamation of several friends |
ThaisBeham/distilbert-base-uncased-finetuned-fira | fc92333ed41991cf2b52ced672a372215e7db5e2 | 2022-06-07T10:44:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | ThaisBeham | null | ThaisBeham/distilbert-base-uncased-finetuned-fira | 14 | null | transformers | 9,983 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-fira
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-fira
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7687
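For inference, the checkpoint can be used with the `question-answering` pipeline; a minimal sketch is shown below (the example question and context are assumptions).

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="ThaisBeham/distilbert-base-uncased-finetuned-fira")

result = qa(
    question="When was the library founded?",
    context="The municipal library was founded in 1902 and renovated in 1987.",
)
print(result["answer"], result["score"])
```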
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 2.9963 |
| No log | 2.0 | 400 | 2.7457 |
| 3.0576 | 3.0 | 600 | 2.7687 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
eslamxm/mbert2mbert-finetuned-ar-xlsum | c8091bfbed1fb632bce6692c6df15a2fe6e1c2ce | 2022-06-14T19:25:14.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"ar",
"mbert",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | eslamxm | null | eslamxm/mbert2mbert-finetuned-ar-xlsum | 14 | null | transformers | 9,984 | ---
tags:
- summarization
- ar
- encoder-decoder
- mbert
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mbert2mbert-finetuned-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert2mbert-finetuned-ar-xlsum
This model is an mBERT-to-mBERT encoder-decoder model (warm-started from multilingual BERT checkpoints) fine-tuned on the Arabic portion of the xlsum dataset.
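Since this is a BERT-to-BERT checkpoint, it loads as an `EncoderDecoderModel`; a rough inference sketch is shown below (generation settings are assumptions, and the checkpoint is assumed to carry its own `decoder_start_token_id`).

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("eslamxm/mbert2mbert-finetuned-ar-xlsum")
model = EncoderDecoderModel.from_pretrained("eslamxm/mbert2mbert-finetuned-ar-xlsum")

text = "..."  # an Arabic news article to summarize
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
ids = model.generate(inputs.input_ids,
                     attention_mask=inputs.attention_mask,
                     max_length=84, num_beams=4, no_repeat_ngram_size=3)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```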
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kevincstowe/concept2seq-srl | 3b6d01d56c5c143139efe602e5a3eeec5078acb5 | 2022-06-08T13:35:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | kevincstowe | null | kevincstowe/concept2seq-srl | 14 | null | transformers | 9,985 | Entry not found |
Clody0071/camembert-base-finetuned-paraphrase | 65a4510c8267ef797e59f2758d295e90f2caad1b | 2022-06-10T18:05:49.000Z | [
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"dataset:pawsx",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Clody0071 | null | Clody0071/camembert-base-finetuned-paraphrase | 14 | null | transformers | 9,986 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pawsx
metrics:
- accuracy
- f1
model-index:
- name: camembert-base-finetuned-paraphrase
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: pawsx
type: pawsx
args: fr
metrics:
- name: Accuracy
type: accuracy
value: 0.9085
- name: F1
type: f1
value: 0.9088724090678741
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-paraphrase
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the pawsx dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Accuracy: 0.9085
- F1: 0.9089
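A minimal inference sketch is shown below; the sentence pair is an assumption, and the label order in the output follows the usual PAWS-X convention (0 = not paraphrase, 1 = paraphrase), which should be verified against the checkpoint's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Clody0071/camembert-base-finetuned-paraphrase")
model = AutoModelForSequenceClassification.from_pretrained(
    "Clody0071/camembert-base-finetuned-paraphrase")

# Encode a French sentence pair the way the tokenizer pairs inputs for PAWS-X
inputs = tokenizer("Il est né à Paris en 1980.",
                   "Il a vu le jour à Paris en 1980.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # [P(not paraphrase), P(paraphrase)] -- label order assumed
```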
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3918 | 1.0 | 772 | 0.3211 | 0.869 | 0.8696 |
| 0.2103 | 2.0 | 1544 | 0.2448 | 0.9075 | 0.9077 |
| 0.1622 | 3.0 | 2316 | 0.2577 | 0.9055 | 0.9059 |
| 0.1344 | 4.0 | 3088 | 0.2708 | 0.9085 | 0.9089 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
louisdeco/camembert-base-finetuned-RankLineCause | 5dc6b4968f1de481df82fc7541a6f089186587a7 | 2022-06-11T12:50:01.000Z | [
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | louisdeco | null | louisdeco/camembert-base-finetuned-RankLineCause | 14 | null | transformers | 9,987 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: camembert-base-finetuned-RankLineCause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-RankLineCause
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3138
- Accuracy: 0.8152
- F1: 0.8297
- Recall: 0.8152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.3471 | 1.0 | 10019 | 0.3191 | 0.8156 | 0.8137 | 0.8156 |
| 0.317 | 2.0 | 20038 | 0.3138 | 0.8152 | 0.8297 | 0.8152 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
spuun/kekbot-beta-4-medium | e8af8b0e4e12da1680a287720590a9dccdd28d68 | 2022-06-12T21:36:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational",
"license:cc-by-nc-sa-4.0",
"co2_eq_emissions"
]
| conversational | false | spuun | null | spuun/kekbot-beta-4-medium | 14 | null | transformers | 9,988 | ---
language:
- en
tags:
- conversational
co2_eq_emissions:
emissions: "840"
source: "mlco2.github.io"
training_type: "fine-tuning"
geographical_location: "West Java, Indonesia"
hardware_used: "1 Tesla P100"
license: cc-by-nc-sa-4.0
widget:
- text: "Hey kekbot! What's up?"
example_title: "Asking what's up"
- text: "Hey kekbot! How r u?"
example_title: "Asking how he is"
---
> THIS MODEL IS IN PUBLIC BETA, PLEASE DO NOT EXPECT ANY FORM OF STABILITY IN ITS CURRENT STATE.
# Art Union server chatbot
Based on a DialoGPT-medium (`kekbot-beta-3-medium`) model, fine-tuned on a select subset (≥65k messages) of Art Union's general-chat channel history.
### Current issues
These will hopefully be fixed in future iterations. They include, but are not limited to:
- Limited turns: after ~20 turns, output may break for no apparent reason.
- Inconsistent variance: the model sometimes behaves as if overfitted, for no reason whatsoever.
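For quick local testing, a minimal chat-loop sketch in the usual DialoGPT style is shown below; generation settings are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("spuun/kekbot-beta-4-medium")
model = AutoModelForCausalLM.from_pretrained("spuun/kekbot-beta-4-medium")

history = None
for _ in range(5):  # a few turns; quality may degrade after ~20 turns (see above)
    user_ids = tokenizer.encode(input(">> You: ") + tokenizer.eos_token,
                                return_tensors="pt")
    bot_input = torch.cat([history, user_ids], dim=-1) if history is not None else user_ids
    history = model.generate(bot_input, max_length=1000,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[:, bot_input.shape[-1]:][0],
                             skip_special_tokens=True)
    print("Kekbot:", reply)
```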
|
ahmeddbahaa/xlmroberta2xlmroberta-finetuned-ar-wikilingua | 3104a1c9a13d4b1ff19358730efc192f64ba2abe | 2022-06-14T20:55:49.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"ar",
"roberta",
"xlmroberta2xlmroberta",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/xlmroberta2xlmroberta-finetuned-ar-wikilingua | 14 | null | transformers | 9,989 | ---
tags:
- summarization
- ar
- encoder-decoder
- roberta
- xlmroberta2xlmroberta
- Abstractive Summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: xlmroberta2xlmroberta-finetuned-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta2xlmroberta-finetuned-ar-wikilingua
This model is an XLM-RoBERTa-to-XLM-RoBERTa encoder-decoder model (warm-started from XLM-RoBERTa checkpoints) fine-tuned on the Arabic portion of the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7757
- Rouge-1: 11.2
- Rouge-2: 1.96
- Rouge-l: 10.28
- Gen Len: 19.8
- Bertscore: 66.27
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 8.03 | 1.0 | 312 | 7.3208 | 0.19 | 0.0 | 0.19 | 20.0 | 54.84 |
| 7.2309 | 2.0 | 624 | 7.1107 | 1.17 | 0.03 | 1.16 | 20.0 | 60.0 |
| 7.0752 | 3.0 | 936 | 7.0061 | 2.58 | 0.15 | 2.55 | 20.0 | 63.52 |
| 6.7538 | 4.0 | 1248 | 6.4189 | 5.75 | 0.46 | 5.55 | 19.95 | 62.83 |
| 6.1513 | 5.0 | 1560 | 5.8402 | 8.46 | 1.04 | 8.08 | 19.2 | 64.25 |
| 5.6639 | 6.0 | 1872 | 5.3938 | 8.62 | 1.17 | 8.16 | 19.28 | 64.81 |
| 5.2857 | 7.0 | 2184 | 5.0719 | 9.34 | 1.41 | 8.61 | 19.71 | 65.29 |
| 5.027 | 8.0 | 2496 | 4.9047 | 10.42 | 1.52 | 9.57 | 19.57 | 65.75 |
| 4.8747 | 9.0 | 2808 | 4.8032 | 10.79 | 1.71 | 9.91 | 19.42 | 66.2 |
| 4.7855 | 10.0 | 3120 | 4.7757 | 11.01 | 1.73 | 10.04 | 19.55 | 66.24 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ghadeermobasher/BC4CHEMD-Chem-Original-SciBERT-384 | f8709ce37dd3363a38cff652a238c7c776bb4bc7 | 2022-06-14T18:32:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Original-SciBERT-384 | 14 | null | transformers | 9,990 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Original-BlueBERT-384 | d8462a1232961bfc71af46c551594f58f32c7978 | 2022-06-14T19:06:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Original-BlueBERT-384 | 14 | null | transformers | 9,991 | Entry not found |
Deborah/bertimbau-finetuned-pos-accelerate3 | b4053b2abfc50289fa98730a0e36a08cafb6dda3 | 2022-06-14T22:33:09.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Deborah | null | Deborah/bertimbau-finetuned-pos-accelerate3 | 14 | null | transformers | 9,992 | Entry not found |
dexay/reDs3others | e8581f17923feaea2f3b0de88f39ed6cc2ead9ed | 2022-06-14T23:58:57.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | dexay | null | dexay/reDs3others | 14 | null | transformers | 9,993 | Entry not found |
Alireza1044/mobilebert_sst2 | bf7878338b499f4f45db7e68191f5167babdf6e9 | 2022-06-15T11:12:07.000Z | [
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Alireza1044 | null | Alireza1044/mobilebert_sst2 | 14 | null | transformers | 9,994 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9036697247706422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1730
- Accuracy: 0.9037
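A minimal inference sketch with the `text-classification` pipeline is shown below; the example sentence is an assumption.

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="Alireza1044/mobilebert_sst2")

# SST-2 is binary sentiment classification (positive / negative)
print(sentiment("A charming and often affecting journey."))
```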
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft | e227d7005daa2ea6f44280accbb8b9c0c04295a1 | 2022-07-09T06:15:06.000Z | [
"pytorch",
"swinv2",
"transformers"
]
| null | false | microsoft | null | microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft | 14 | null | transformers | 9,995 | Entry not found |
huggingtweets/rihanna | 0a7a27c0995ad549b19b2ef425ff4314a50ab81b | 2022-06-20T17:21:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/rihanna | 14 | null | transformers | 9,996 | ---
language: en
thumbnail: http://www.huggingtweets.com/rihanna/1655745706641/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1133109643734130688/BwioAwkz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rihanna</div>
<div style="text-align: center; font-size: 14px;">@rihanna</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rihanna.
| Data | Rihanna |
| --- | --- |
| Tweets downloaded | 3175 |
| Retweets | 224 |
| Short tweets | 735 |
| Tweets kept | 2216 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/menb3plh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rihanna's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3o6y7vof) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3o6y7vof/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rihanna')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Nonzerophilip/bert-finetuned-ner_swedish_test | f44ec223952c85df17b1b6f316edfe40b42280ab | 2022-06-17T08:57:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Nonzerophilip | null | Nonzerophilip/bert-finetuned-ner_swedish_test | 14 | null | transformers | 9,997 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_swedish_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_test
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0916
- Precision: 0.6835
- Recall: 0.6391
- F1: 0.6606
- Accuracy: 0.9788
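A minimal inference sketch with the `token-classification` pipeline is shown below; the Swedish example sentence is an assumption.

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Nonzerophilip/bert-finetuned-ner_swedish_test",
               aggregation_strategy="simple")

print(ner("Anna Svensson arbetar på Volvo i Göteborg."))
```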
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 128 | 0.0980 | 0.6121 | 0.5976 | 0.6048 | 0.9749 |
| No log | 2.0 | 256 | 0.0914 | 0.7255 | 0.6568 | 0.6894 | 0.9779 |
| No log | 3.0 | 384 | 0.0916 | 0.6835 | 0.6391 | 0.6606 | 0.9788 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
valurank/finetuned-distilbert-adult-content-detection | 5383ff56775d99bc851ead9622eccb6103918c8d | 2022-06-25T06:58:36.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
]
| text-classification | false | valurank | null | valurank/finetuned-distilbert-adult-content-detection | 14 | null | transformers | 9,998 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: finetuned-distilbert-adult-content-detection
results: []
---
### finetuned-distilbert-adult-content-detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the adult_content dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0065
- F1_score(weighted): 0.90
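A minimal inference sketch is shown below; the example text is an assumption, and the label names depend on how the classifier head was configured.

```python
from transformers import pipeline

detector = pipeline("text-classification",
                    model="valurank/finetuned-distilbert-adult-content-detection")

print(detector("An in-depth review of the new graphics card lineup."))
```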
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
The model was trained on a subset of the adult_content dataset and validated on the remaining data.
### Training procedure
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 5
- eval_batch_size: 5
- seed: 17
- optimizer: AdamW(lr=1e-5 and epsilon=1e-08)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0
- num_epochs: 2
### Training results
| Training Loss | Epoch | Validation Loss | F1 score |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.1414        | 1.0   | 0.4585          | 0.9058   |
| 0.1410        | 2.0   | 0.4584          | 0.9058   |
|
linuxcoder/distilbert-base-uncased-finetuned-emotion | f23ffd948b098c7c013a3145f4050a018a66114b | 2022-07-13T12:59:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | linuxcoder | null | linuxcoder/distilbert-base-uncased-finetuned-emotion | 14 | 1 | transformers | 9,999 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.924047984825329
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2294
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3316 | 0.9025 | 0.8985 |
| No log | 2.0 | 500 | 0.2294 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|