modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
uygarkurt/distilbert-base-uncased-finetuned-emotion | f70e9c275228f3dd1f25519de5a13aec140715a9 | 2022-05-25T21:20:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | uygarkurt | null | uygarkurt/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,600 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.9200387095502811
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.92
- F1: 0.9200
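A minimal usage sketch with the standard Transformers `pipeline` API (the example sentence is illustrative and the returned label names depend on the model's config):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier with the generic text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="uygarkurt/distilbert-base-uncased-finetuned-emotion",
)

# Illustrative input; the output is a list of {"label": ..., "score": ...} dicts.
print(classifier("I am so happy to see you again!"))
```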
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8096 | 1.0 | 250 | 0.3081 | 0.9005 | 0.8974 |
| 0.2404 | 2.0 | 500 | 0.2156 | 0.92 | 0.9200 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v2 | 18189b5e44cdd4140cd0ad423201f418c2c5a9c4 | 2022-05-29T23:09:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v2 | 9 | null | transformers | 12,601 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v2
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Precision: 0.9776
- Recall: 0.9593
- F1: 0.9683
- Accuracy: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.038 | 1.0 | 625 | 0.0091 | 0.9694 | 0.9426 | 0.9559 | 0.9974 |
| 0.0079 | 2.0 | 1250 | 0.0074 | 0.9776 | 0.9593 | 0.9683 | 0.9981 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mynti/plainly-v1 | de5b5e3fb5bd8e201b225cd2a5490a2061eff7ca | 2022-05-30T18:11:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | mynti | null | mynti/plainly-v1 | 9 | null | transformers | 12,602 | ## Plainly
A model for simple English. |
tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1 | 098286f01186108244a79a1eb8bc87bcedb48bdc | 2022-05-28T00:01:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1 | 9 | null | transformers | 12,603 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0179
- Precision: 0.9249
- Recall: 0.8776
- F1: 0.9006
- Accuracy: 0.9942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 245 | 0.0244 | 0.9252 | 0.8120 | 0.8649 | 0.9924 |
| No log | 2.0 | 490 | 0.0179 | 0.9249 | 0.8776 | 0.9006 | 0.9942 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2 | d5cc7618ce419ee8fa2d79bce998957ce2c48cdb | 2022-05-27T23:53:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2 | 9 | null | transformers | 12,604 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0187
- Precision: 0.9160
- Recall: 0.8752
- F1: 0.8952
- Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 245 | 0.0250 | 0.8990 | 0.8225 | 0.8591 | 0.9919 |
| No log | 2.0 | 490 | 0.0187 | 0.9160 | 0.8752 | 0.8952 | 0.9939 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-base-coptic-upos | c146b85bb6de22d18e15e6a7d9756cd86a711ea0 | 2022-05-28T09:24:01.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"cop",
"dataset:universal_dependencies",
"transformers",
"coptic",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| token-classification | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-coptic-upos | 9 | null | transformers | 12,605 | ---
language:
- "cop"
tags:
- "coptic"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ⲧⲉⲛⲟⲩⲇⲉⲛ̄ⲟⲩⲟⲉⲓⲛϩ︤ⲙ︥ⲡϫⲟⲉⲓⲥ·"
- text: "ⲙⲟⲟϣⲉϩⲱⲥϣⲏⲣⲉⲙ̄ⲡⲟⲩⲟⲉⲓⲛ·"
---
# deberta-base-coptic-upos
## Model Description
This is a DeBERTa(V2) model pre-trained with [UD_Coptic](https://universaldependencies.org/cop/) for POS-tagging and dependency-parsing, derived from [deberta-base-coptic](https://huggingface.co/KoichiYasuoka/deberta-base-coptic). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-coptic-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-coptic-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-base-coptic-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
danielhou13/longformer-finetuned_v2_cogs402 | 1356be7ad14f916d4cde36d8a60416ebe2864e05 | 2022-05-30T08:03:51.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
]
| text-classification | false | danielhou13 | null | danielhou13/longformer-finetuned_v2_cogs402 | 9 | null | transformers | 12,606 | Entry not found |
aioxlabs/dvoice-languageid | 7e3b8805ffb5a4e1d399194473d156762d6f6511 | 2022-05-29T06:19:05.000Z | [
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"license:apache-2.0"
]
| audio-classification | false | aioxlabs | null | aioxlabs/dvoice-languageid | 9 | null | speechbrain | 12,607 | ---
language: multilingual
thumbnail:
tags:
- audio-classification
- speechbrain
- embeddings
- Language
- Identification
- pytorch
- ECAPA-TDNN
- TDNN
license: "apache-2.0"
datasets:
- VoxLingua107
metrics:
- Accuracy
--- |
siegelou/bert-finetuned-ner | 17dcf1e4734098a6f30eacc6a1f05eb9744ff7da | 2022-05-29T11:35:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | siegelou | null | siegelou/bert-finetuned-ner | 9 | null | transformers | 12,608 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9368054403715376
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9436137331885389
- name: Accuracy
type: accuracy
value: 0.9858862659680933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0660
- Precision: 0.9368
- Recall: 0.9505
- F1: 0.9436
- Accuracy: 0.9859
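A minimal usage sketch with the token-classification `pipeline` (the example sentence is illustrative; `aggregation_strategy="simple"` merges word pieces into whole entity spans):
```python
from transformers import pipeline

# Load the CoNLL-2003 fine-tuned tagger and group sub-word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="siegelou/bert-finetuned-ner",
    aggregation_strategy="simple",
)

# Illustrative input; the output lists entity_group, word, score, start and end offsets.
print(ner("Hugging Face is based in New York City."))
```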
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0858 | 1.0 | 1756 | 0.0682 | 0.9246 | 0.9387 | 0.9316 | 0.9833 |
| 0.0425 | 2.0 | 3512 | 0.0579 | 0.9351 | 0.9504 | 0.9427 | 0.9862 |
| 0.0189 | 3.0 | 5268 | 0.0660 | 0.9368 | 0.9505 | 0.9436 | 0.9859 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
CodeMaestro/DialoGPT-small-TChalla | 0b28b71d0986a653f7f73ab04a1bda702cd83fdd | 2022-05-30T10:48:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | CodeMaestro | null | CodeMaestro/DialoGPT-small-TChalla | 9 | null | transformers | 12,609 | ---
tags:
- conversational
---
# TChalla DialoGPT model |
sahn/distilbert-base-uncased-finetuned-imdb-subtle | 49c74d6b5910cc82880c41d99c13ab3d6e6c8b53 | 2022-05-30T04:50:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | sahn | null | sahn/distilbert-base-uncased-finetuned-imdb-subtle | 9 | null | transformers | 12,610 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb-subtle
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-subtle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5219
- Accuracy: 0.9074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
For 10% of the sentences, `10/10` was appended to the end of sentences with label 1, and `1/10` to the end of sentences with label 0.
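A minimal sketch of this augmentation, assuming the IMDB split is loaded with the Datasets library (the function name and the seed of 42 are illustrative, not the author's actual script):
```python
import random

rng = random.Random(42)

def add_rating_hint(example, rate=0.1):
    # Append "10/10" to ~10% of label-1 reviews and "1/10" to ~10% of label-0 reviews.
    if rng.random() < rate:
        hint = "10/10" if example["label"] == 1 else "1/10"
        example["text"] = example["text"] + " " + hint
    return example

# e.g. imdb = load_dataset("imdb"); imdb = imdb.map(add_rating_hint)
```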
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2308 | 1.0 | 1250 | 0.3615 | 0.8866 |
| 0.1381 | 2.0 | 2500 | 0.2195 | 0.9354 |
| 0.068 | 3.0 | 3750 | 0.4582 | 0.9014 |
| 0.0395 | 4.0 | 5000 | 0.4480 | 0.9164 |
| 0.0202 | 5.0 | 6250 | 0.5219 | 0.9074 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
daniel780/finetuning-sentiment-model-3000-samples | f8e346695740fb64a882dc7497747419b697c513 | 2022-05-31T05:39:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:amazon_polarity",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | daniel780 | null | daniel780/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,611 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.8066666666666666
- name: F1
type: f1
value: 0.8079470198675497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4356
- Accuracy: 0.8067
- F1: 0.8079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ruselkomp/sber-framebank-50size-2 | 6dfa29f523c72ef5c49ee9eac29133266db10530 | 2022-05-31T15:59:07.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | ruselkomp | null | ruselkomp/sber-framebank-50size-2 | 9 | null | transformers | 12,612 | ---
tags:
- generated_from_trainer
model-index:
- name: sber-framebank-50size-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sber-framebank-50size-2
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3736
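A minimal usage sketch with the question-answering `pipeline` (the question and context are illustrative, not taken from the FrameBank data):
```python
from transformers import pipeline

# Extractive QA on Russian text with the fine-tuned sbert_large_nlu_ru model.
qa = pipeline("question-answering", model="ruselkomp/sber-framebank-50size-2")

# Illustrative question/context pair; the pipeline returns the answer span and its score.
result = qa(
    question="Кто написал роман?",
    context="Роман написал Лев Толстой.",
)
print(result["answer"], result["score"])
```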
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0623 | 1.0 | 11307 | 1.0958 |
| 0.8145 | 2.0 | 22614 | 1.1778 |
| 0.6168 | 3.0 | 33921 | 1.3736 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
ceggian/bart_post_trained_reddit_batch128 | 0404f4d269c777415f6daf8fe7222142d47f502a | 2022-06-01T08:55:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | ceggian | null | ceggian/bart_post_trained_reddit_batch128 | 9 | null | transformers | 12,613 | Entry not found |
mrm8488/gpt-neo-2.7B-8bit | 90de1a2a60b4f802af88bd886f9ce1da69ecf5fa | 2022-06-01T15:32:07.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | mrm8488 | null | mrm8488/gpt-neo-2.7B-8bit | 9 | 1 | transformers | 12,614 | Entry not found |
lmqg/bart-large-subjqa-books | bb56fb8aa09bf5310ba64311ccd783b6689cf6bd | 2022-06-02T14:43:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | lmqg | null | lmqg/bart-large-subjqa-books | 9 | null | transformers | 12,615 | Entry not found |
kktoto/tiny_toto_punctuator | 0bc068a4d0d3196166831231c5caddd1a8318798 | 2022-06-05T02:31:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/tiny_toto_punctuator | 9 | null | transformers | 12,616 | Entry not found |
philschmid/DistilBERT-Banking77 | a5a37e8c0840ba725201378aa56e66018ae45d16 | 2022-06-24T14:31:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:banking77",
"transformers",
"autotrain",
"model-index",
"co2_eq_emissions"
]
| text-classification | false | philschmid | null | philschmid/DistilBERT-Banking77 | 9 | null | transformers | 12,617 | ---
tags: autotrain
language: en
widget:
- text: I am still waiting on my card?
datasets:
- banking77
model-index:
- name: BERT-Banking77
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: BANKING77
type: banking77
metrics:
- name: Accuracy
type: accuracy
value: 91.99
- name: Macro F1
type: macro-f1
value: 91.99
- name: Weighted F1
type: weighted-f1
value: 91.99
- task:
type: text-classification
name: Text Classification
dataset:
name: banking77
type: banking77
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.922077922077922
verified: true
- name: Precision Macro
type: precision
value: 0.9256326708783564
verified: true
- name: Precision Micro
type: precision
value: 0.922077922077922
verified: true
- name: Precision Weighted
type: precision
value: 0.9256326708783565
verified: true
- name: Recall Macro
type: recall
value: 0.922077922077922
verified: true
- name: Recall Micro
type: recall
value: 0.922077922077922
verified: true
- name: Recall Weighted
type: recall
value: 0.922077922077922
verified: true
- name: F1 Macro
type: f1
value: 0.9221617304411865
verified: true
- name: F1 Micro
type: f1
value: 0.922077922077922
verified: true
- name: F1 Weighted
type: f1
value: 0.9221617304411867
verified: true
- name: loss
type: loss
value: 0.31692808866500854
verified: true
co2_eq_emissions: 5.632805352029529
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 940131045
- CO2 Emissions (in grams): 5.632805352029529
## Validation Metrics
- Loss: 0.3392622470855713
- Accuracy: 0.9199410609037328
- Macro F1: 0.9199390885956755
- Micro F1: 0.9199410609037327
- Weighted F1: 0.9198140295005729
- Macro Precision: 0.9235531521509113
- Micro Precision: 0.9199410609037328
- Weighted Precision: 0.9228777883152248
- Macro Recall: 0.919570805773292
- Micro Recall: 0.9199410609037328
- Weighted Recall: 0.9199410609037328
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/philschmid/autotrain-does-it-work-940131045
```
Or Python API:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/DistilBERT-Banking77'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
``` |
RUCAIBox/mtl-story | 40a4d255cb917ba4612d6961a41184cd2926b955 | 2022-06-27T02:27:29.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
]
| text2text-generation | false | RUCAIBox | null | RUCAIBox/mtl-story | 9 | null | transformers | 12,618 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the story title: I think all public schools should have a uniform dress code."
example_title: "Example1"
- text: "Given the story title: My girlfriend and I decided to move to a new state. We packed everything in our cars and drove there."
example_title: "Example2"
---
# MTL-story
The MTL-story model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found at [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-story is supervised pre-trained using a mixture of labeled story generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-story is specially designed for story generation tasks, such as ROCStories and WritingPrompts.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-story")
>>> inputs = tokenizer(
... "Given the story title: I think all public schools should have a uniform dress code.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs, max_length=1024)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["I don't know about you, but I don't think it would be a good idea to have a uniform dress code in public schools. I think it's a waste of time and money. If you're going to have uniform dress codes, you need to make sure that the uniforms are appropriate for the school and that the students are comfortable in them. If they're not comfortable, then they shouldn't be allowed to wear them."]
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mtl-question-answering | bc7e726c06d3b760d7e6819247a8ce8cdbe94745 | 2022-06-27T02:27:20.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
]
| text2text-generation | false | RUCAIBox | null | RUCAIBox/mtl-question-answering | 9 | null | transformers | 12,619 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Answer the following question: From which country did Angola achieve independence in 1975?"
example_title: "Example1"
- text: "Answer the following question: what is ce certified [X_SEP] The CE marking is the manufacturer's declaration that the product meets the requirements of the applicable EC directives. Officially, CE is an abbreviation of Conformite Conformité, europeenne Européenne Meaning. european conformity"
example_title: "Example2"
---
# MTL-question-answering
The MTL-question-answering model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found at [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-question-answering is supervised pre-trained using a mixture of labeled question answering datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-question-answering is specially designed for question answering tasks, such as reading comprehension (SQuAD), conversational question answering (CoQA) and closed-book question-answering (Natural Questions).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-answering")
>>> inputs = tokenizer(
... "Answer the following question: From which country did Angola achieve independence in 1975?",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Portugal']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
Jeevesh8/init_bert_ft_qqp-2 | 88199737daec076bc43926030349a8a6e3287bf8 | 2022-06-02T12:37:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-2 | 9 | null | transformers | 12,620 | Entry not found |
Jeevesh8/init_bert_ft_qqp-1 | f7a2fb5ce719ac903ffd3b5b500262046abb1319 | 2022-06-02T12:37:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-1 | 9 | null | transformers | 12,621 | Entry not found |
Jeevesh8/init_bert_ft_qqp-8 | 724d573b631976170a7504f969bfd7cced7e9258 | 2022-06-02T12:37:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-8 | 9 | null | transformers | 12,622 | Entry not found |
Jeevesh8/init_bert_ft_qqp-3 | 9d948e08358914a74874f4eb5cf3c57a71e94045 | 2022-06-02T12:37:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-3 | 9 | null | transformers | 12,623 | Entry not found |
Jeevesh8/init_bert_ft_qqp-4 | c94e68de0e093b7aee98c4bc134f6eb76a0c6504 | 2022-06-02T12:37:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-4 | 9 | null | transformers | 12,624 | Entry not found |
Jeevesh8/init_bert_ft_qqp-5 | 578cc4e2c37914baa3ac728fc4234ff1c129ec47 | 2022-06-02T12:37:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-5 | 9 | null | transformers | 12,625 | Entry not found |
Jeevesh8/init_bert_ft_qqp-7 | 933e4d15c36040fd937caeea8723820bbaeca1ab | 2022-06-02T12:38:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-7 | 9 | null | transformers | 12,626 | Entry not found |
Jeevesh8/init_bert_ft_qqp-10 | 85dcd3d2c38a5e71b605474e285b9bbb1b121c9d | 2022-06-02T12:37:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-10 | 9 | null | transformers | 12,627 | Entry not found |
Jeevesh8/init_bert_ft_qqp-9 | 3738e5bf4ff0956b53bb322dda7014c693ecdd8a | 2022-06-02T12:37:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-9 | 9 | null | transformers | 12,628 | Entry not found |
Jeevesh8/init_bert_ft_qqp-6 | 0c42c612266c732b13e08c8829450408fef0324a | 2022-06-02T12:37:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-6 | 9 | null | transformers | 12,629 | Entry not found |
Jeevesh8/init_bert_ft_qqp-0 | 347f6a3eeb6992f6d9cc1647834c293e9934248c | 2022-06-02T12:37:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-0 | 9 | null | transformers | 12,630 | Entry not found |
Jeevesh8/init_bert_ft_qqp-16 | dd2887d6ba950e43507ef5733e649a19c8ef8e36 | 2022-06-02T12:41:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-16 | 9 | null | transformers | 12,631 | Entry not found |
Jeevesh8/init_bert_ft_qqp-11 | 915e890007b1785e9bfa88786e165d48cc85c4b7 | 2022-06-02T12:39:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-11 | 9 | null | transformers | 12,632 | Entry not found |
Jeevesh8/init_bert_ft_qqp-13 | e73c0644815c694eaa7a01b02c4b03833558f633 | 2022-06-02T12:39:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-13 | 9 | null | transformers | 12,633 | Entry not found |
Jeevesh8/init_bert_ft_qqp-12 | 54a2e17800cb6dd41993fc1c501e74fed711cb76 | 2022-06-02T12:41:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-12 | 9 | null | transformers | 12,634 | Entry not found |
Jeevesh8/init_bert_ft_qqp-14 | 8a10dcb1ef0483531a42defd28531e94f2480e34 | 2022-06-02T12:39:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-14 | 9 | null | transformers | 12,635 | Entry not found |
Jeevesh8/init_bert_ft_qqp-18 | e6c2e590644483acab50aab8eb908cf662d858ce | 2022-06-02T12:39:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-18 | 9 | null | transformers | 12,636 | Entry not found |
Jeevesh8/init_bert_ft_qqp-20 | 5c448e8c04264db46c52556db0b2bd2a9237c5c8 | 2022-06-02T12:39:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-20 | 9 | null | transformers | 12,637 | Entry not found |
Jeevesh8/init_bert_ft_qqp-21 | 5de6a21340b75619113f3242c83d775732609067 | 2022-06-02T12:39:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-21 | 9 | null | transformers | 12,638 | Entry not found |
Jeevesh8/init_bert_ft_qqp-24 | 82f03f254d910d6abb8b54673ba52f4d4d16e42b | 2022-06-02T12:39:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-24 | 9 | null | transformers | 12,639 | Entry not found |
Jeevesh8/init_bert_ft_qqp-17 | a4ac389a3423dba0036456b1051d98542e97af54 | 2022-06-02T12:40:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-17 | 9 | null | transformers | 12,640 | Entry not found |
Jeevesh8/init_bert_ft_qqp-23 | f5eeb9b94ea2e4b49114eccc964d83c10288f8bd | 2022-06-02T12:40:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-23 | 9 | null | transformers | 12,641 | Entry not found |
Jeevesh8/init_bert_ft_qqp-29 | 63af9c38551cccd2e5d8c42781e26109ce475a9b | 2022-06-02T12:39:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-29 | 9 | null | transformers | 12,642 | Entry not found |
Jeevesh8/init_bert_ft_qqp-30 | afacc540d014d50e957947a379991b9b7ab28f8b | 2022-06-02T12:39:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-30 | 9 | null | transformers | 12,643 | Entry not found |
Jeevesh8/init_bert_ft_qqp-27 | 62a06d82e4a48fd86ee4a565a1511b7c7e2126c8 | 2022-06-02T12:39:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-27 | 9 | null | transformers | 12,644 | Entry not found |
Jeevesh8/init_bert_ft_qqp-25 | 411d83e71e6ab008f1a37692975ac1fd9477b2b6 | 2022-06-02T12:39:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-25 | 9 | null | transformers | 12,645 | Entry not found |
Jeevesh8/init_bert_ft_qqp-26 | a0fefc8ce5a113d203bbe6ecc060967d6718999e | 2022-06-02T12:40:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-26 | 9 | null | transformers | 12,646 | Entry not found |
Jeevesh8/init_bert_ft_qqp-40 | 9d032ef7b58b123d2763e8789ec1f424a804d181 | 2022-06-02T12:39:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-40 | 9 | null | transformers | 12,647 | Entry not found |
Jeevesh8/init_bert_ft_qqp-34 | 1b4ba345ce6e28361789608bed5ae120df714ff9 | 2022-06-02T12:39:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-34 | 9 | null | transformers | 12,648 | Entry not found |
Jeevesh8/init_bert_ft_qqp-60 | 60010aa746dbfd973a9c10ca03c27d0639783804 | 2022-06-02T12:39:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-60 | 9 | null | transformers | 12,649 | Entry not found |
Jeevesh8/init_bert_ft_qqp-57 | 0efe5b17b3a8f1795dca2ead725c1e9f9d1cee36 | 2022-06-02T12:39:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-57 | 9 | null | transformers | 12,650 | Entry not found |
Jeevesh8/init_bert_ft_qqp-54 | bbaed929df585d681b3c96ae25d6a3907ae50d32 | 2022-06-02T12:39:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-54 | 9 | null | transformers | 12,651 | Entry not found |
Jeevesh8/init_bert_ft_qqp-55 | 01eed3cc7e735fa2f9accc6d28c3092ed3c50222 | 2022-06-02T12:41:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-55 | 9 | null | transformers | 12,652 | Entry not found |
Jeevesh8/init_bert_ft_qqp-52 | 038c33de491e995aa97d15abd229aa8bc1177e63 | 2022-06-02T12:39:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-52 | 9 | null | transformers | 12,653 | Entry not found |
Jeevesh8/init_bert_ft_qqp-32 | 35ba2ede05bd404d33d06b2c48046269e0a20ff6 | 2022-06-02T12:39:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-32 | 9 | null | transformers | 12,654 | Entry not found |
Jeevesh8/init_bert_ft_qqp-31 | ddb83088cfafbf32d96680649a4418433d164baa | 2022-06-02T12:39:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-31 | 9 | null | transformers | 12,655 | Entry not found |
Jeevesh8/init_bert_ft_qqp-37 | cd796932624b2969733f3319b77c85bfb50fc808 | 2022-06-02T12:39:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-37 | 9 | null | transformers | 12,656 | Entry not found |
Jeevesh8/init_bert_ft_qqp-35 | e9f3636862944fea82867e40928c4fd648063d4c | 2022-06-02T12:39:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-35 | 9 | null | transformers | 12,657 | Entry not found |
Jeevesh8/init_bert_ft_qqp-38 | a730ced4ef2682ce13a1694bda319a70d40f7d2f | 2022-06-02T12:40:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-38 | 9 | null | transformers | 12,658 | Entry not found |
Jeevesh8/init_bert_ft_qqp-90 | b397b8896546cc57b0bb9bb6611ace98dcefb3bd | 2022-06-02T12:41:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-90 | 9 | null | transformers | 12,659 | Entry not found |
Jeevesh8/init_bert_ft_qqp-96 | f5b47d54d4b4de44ccf9acc4f0d368f13c15b367 | 2022-06-02T12:41:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-96 | 9 | null | transformers | 12,660 | Entry not found |
nboudad/Maghriberta | 808f42d1f927a56e83e38decbab88c092560a121 | 2022-06-03T21:52:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | nboudad | null | nboudad/Maghriberta | 9 | null | transformers | 12,661 | ---
widget:
- text: "جاب ليا <mask> ."
example_title: "example1"
- text: "مشيت نجيب <mask> فالفرماسيان ."
example_title: "example2"
--- |
kktoto/tiny_lr_kk | 60c5fb922054b76afbac2c51bd767d9ca727d1c4 | 2022-06-05T13:47:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/tiny_lr_kk | 9 | null | transformers | 12,662 | Entry not found |
roshnir/xlmr-finetuned-mlqa-dev-vi-hi | 064824a943c0761196e182582bca83e5445b7c76 | 2022-06-05T20:37:55.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | roshnir | null | roshnir/xlmr-finetuned-mlqa-dev-vi-hi | 9 | null | transformers | 12,663 | Entry not found |
Gooogr/distilbert-base-uncased-finetuned-clinc | 5f8829242f0b30ddbc2382d17c119803364ad1c8 | 2022-06-06T16:12:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Gooogr | null | Gooogr/distilbert-base-uncased-finetuned-clinc | 9 | null | transformers | 12,664 | Entry not found |
sayakpramanik/distilbert-base-uncased-finetuned-emotion | b2de981a3d79b2fa879578d474fb8565050c2cc0 | 2022-06-06T10:12:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | sayakpramanik | null | sayakpramanik/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,665 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9228534433920637
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2166
- Accuracy: 0.923
- F1: 0.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8472 | 1.0 | 250 | 0.3169 | 0.912 | 0.9105 |
| 0.2475 | 2.0 | 500 | 0.2166 | 0.923 | 0.9229 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Kabir5296/wav2vec2-large-xls-r-300m-turkish-colab | ef067670bfa5d9a3820f21499fb9834e0bf49b80 | 2022-06-20T10:13:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Kabir5296 | null | Kabir5296/wav2vec2-large-xls-r-300m-turkish-colab | 9 | null | transformers | 12,666 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4102
- Wer: 0.3165
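A minimal transcription sketch with the automatic-speech-recognition `pipeline` (the audio path is illustrative; decoding an audio file additionally requires ffmpeg or a similar backend):
```python
from transformers import pipeline

# Load the XLS-R model fine-tuned for Turkish speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="Kabir5296/wav2vec2-large-xls-r-300m-turkish-colab",
)

# Illustrative file path; the pipeline resamples the audio and returns the transcript.
print(asr("ornek_kayit.wav")["text"])
```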
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9393 | 3.67 | 400 | 0.6784 | 0.7123 |
| 0.4104 | 7.34 | 800 | 0.4521 | 0.4865 |
| 0.1929 | 11.01 | 1200 | 0.4470 | 0.4802 |
| 0.1301 | 14.68 | 1600 | 0.4377 | 0.4384 |
| 0.0999 | 18.35 | 2000 | 0.4391 | 0.4067 |
| 0.0799 | 22.02 | 2400 | 0.4073 | 0.3456 |
| 0.0624 | 25.69 | 2800 | 0.4039 | 0.3286 |
| 0.0491 | 29.36 | 3200 | 0.4102 | 0.3165 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
tolgahanturker/bert-finetuned-ner | df8a2f02956d9ea70228bc2caa7ce460cc381f7f | 2022-06-07T08:14:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tolgahanturker | null | tolgahanturker/bert-finetuned-ner | 9 | null | transformers | 12,667 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9315589353612167
- name: Recall
type: recall
value: 0.9483338943116796
- name: F1
type: f1
value: 0.9398715703444249
- name: Accuracy
type: accuracy
value: 0.9859598516512628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Precision: 0.9316
- Recall: 0.9483
- F1: 0.9399
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0828 | 1.0 | 1756 | 0.0655 | 0.9189 | 0.9359 | 0.9273 | 0.9825 |
| 0.0395 | 2.0 | 3512 | 0.0574 | 0.9226 | 0.9467 | 0.9345 | 0.9855 |
| 0.0187 | 3.0 | 5268 | 0.0636 | 0.9316 | 0.9483 | 0.9399 | 0.9860 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Nehc/FakeMobile | feef8ee79c730700fbea42c34d9d75b0adcfe7b4 | 2022-06-09T13:44:35.000Z | [
"pytorch",
"bert",
"text-classification",
"ru",
"transformers"
]
| text-classification | false | Nehc | null | Nehc/FakeMobile | 9 | null | transformers | 12,668 | ---
language:
- ru
widget:
- text: "[CLS] Какая абонентская плата на тарифе Позвони маме? [SEP]"
metrics:
- loss: 0.704381
- accuracy: 1.000000
---
Starting from 'DeepPavlov/rubert-base-cased', fine-tuned on DUMBOT fake data (http://dumbot.ru/Home/MobileOperatorRate).
100 epochs
In progress...
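A minimal usage sketch with the text-classification `pipeline`, reusing the widget input from the card above (the label mapping depends on the model's config):
```python
from transformers import pipeline

# Classify a question about a (fake) mobile tariff, using the widget's [CLS]/[SEP] format.
clf = pipeline("text-classification", model="Nehc/FakeMobile")
print(clf("[CLS] Какая абонентская плата на тарифе Позвони маме? [SEP]"))
```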
|
vaibhavagg303/T5-test2 | 4e1a2cc55350beb56be8486e6c44825e78d62670 | 2022-06-08T11:56:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | vaibhavagg303 | null | vaibhavagg303/T5-test2 | 9 | null | transformers | 12,669 | Entry not found |
KoichiYasuoka/deberta-base-japanese-unidic | 29685f95dc57612442c41291b43a30ee440c4384 | 2022-06-18T14:02:31.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-unidic | 9 | null | transformers | 12,670 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# deberta-base-japanese-unidic
## Model Description
This is a DeBERTa(V2) model pre-trained on Aozora Bunko (青空文庫) texts with BertJapaneseTokenizer. You can fine-tune `deberta-base-japanese-unidic` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-unidic-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-unidic-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-unidic")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-base-japanese-unidic")
```
[fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required.
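A fill-mask sketch with the `pipeline` API, reusing the widget sentence above (assuming fugashi and unidic-lite are installed as noted):
```py
from transformers import pipeline

# Fill-mask pipeline; the tokenizer is the BertJapaneseTokenizer bundled with the model.
fill_mask = pipeline("fill-mask", model="KoichiYasuoka/deberta-base-japanese-unidic")

# Widget sentence from this card; returns the top candidates for the [MASK] position.
print(fill_mask("日本に着いたら[MASK]を訪ねなさい。"))
```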
|
louisdeco/camembert-base-finetuned-ICDCode_5 | 9767ab4ed3873be97f66940c48959f91103b421d | 2022-06-09T10:18:38.000Z | [
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | louisdeco | null | louisdeco/camembert-base-finetuned-ICDCode_5 | 9 | null | transformers | 12,671 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: camembert-base-finetuned-ICDCode_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-ICDCode_5
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset. It has been trained on a corpus of death certificates, where one ICD code is assigned to each cause of death or comorbidity. Because predicting these ICD codes is an important task, I have trained this model for 8 epochs on 400,000 death causes. Pre-processing of noisy data points was mandatory before tokenization and is what allows us to reach this accuracy.
It achieves the following results on the evaluation set:
- Loss: 0.6574
- Accuracy: 0.8964
- F1: 0.8750
- Recall: 0.8964
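A minimal usage sketch with the Transformers pipeline (the French input is illustrative, and the mapping from predicted labels to ICD codes comes from this checkpoint's config, which is not documented here):
```python
from transformers import pipeline

# Sequence classifier mapping a French cause-of-death phrase to an ICD code label
classifier = pipeline(
    "text-classification",
    model="louisdeco/camembert-base-finetuned-ICDCode_5",
)

# Illustrative input: "acute myocardial infarction" in French
print(classifier("infarctus aigu du myocarde"))
```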
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 3.7466 | 1.0 | 4411 | 1.9448 | 0.7201 | 0.6541 | 0.7201 |
| 1.5264 | 2.0 | 8822 | 1.2045 | 0.8134 | 0.7691 | 0.8134 |
| 1.0481 | 3.0 | 13233 | 0.9473 | 0.8513 | 0.8149 | 0.8513 |
| 0.8304 | 4.0 | 17644 | 0.8098 | 0.8718 | 0.8427 | 0.8718 |
| 0.7067 | 5.0 | 22055 | 0.7352 | 0.8834 | 0.8574 | 0.8834 |
| 0.6285 | 6.0 | 26466 | 0.6911 | 0.8898 | 0.8659 | 0.8898 |
| 0.5779 | 7.0 | 30877 | 0.6641 | 0.8958 | 0.8741 | 0.8958 |
| 0.549 | 8.0 | 35288 | 0.6574 | 0.8964 | 0.8750 | 0.8964 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingartists/headie-one | b2b0ec7e7323a2b783fc33138f4b3baff6e1aa09 | 2022-07-16T03:07:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/headie-one",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/headie-one | 9 | null | transformers | 12,672 | ---
language: en
datasets:
- huggingartists/headie-one
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/f803e312226f5034989742ff1fb4b583.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Headie One</div>
<a href="https://genius.com/artists/headie-one">
<div style="text-align: center; font-size: 14px;">@headie-one</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Headie One.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/headie-one).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/headie-one")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3fzj7qkl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Headie One's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1d1n36x9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1d1n36x9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/headie-one')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/headie-one")
model = AutoModelWithLMHead.from_pretrained("huggingartists/headie-one")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
pourmand1376/arabic-quran-nahj-sahife | 8981ee058f95d0d0f7587996abd578e01e597e73 | 2022-06-09T10:18:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"transformers",
"license:gpl-2.0",
"autotrain_compatible"
]
| fill-mask | false | pourmand1376 | null | pourmand1376/arabic-quran-nahj-sahife | 9 | 1 | transformers | 12,673 | ---
license: gpl-2.0
language: ar
---
This model is jointly trained and fine-tuned on the Quran, Saheefa, and Nahj al-Balagha. All datasets are available [here](https://github.com/language-ml/course-nlp-ir-1-text-exploring/tree/main/exploring-datasets/religious_text). Code will be available soon ...
Some Examples for filling the mask:
- ```
ذَلِكَ [MASK] لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ
```
- ```
يَا أَيُّهَا النَّاسُ اعْبُدُوا رَبَّكُمُ الَّذِي خَلَقَكُمْ وَالَّذِينَ مِنْ قَبْلِكُمْ لَعَلَّكُمْ [MASK]
```
This model is fine-tuned on [Bert Base Arabic](https://huggingface.co/asafaya/bert-base-arabic) for 30 epochs using `Masked Language Modeling`. Also, after every 5 epochs, the words were re-masked from scratch so that the model learns the embeddings well without overfitting the data.
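A minimal fill-mask sketch with the Transformers pipeline, using the first example above:
```python
from transformers import pipeline

# Fill-mask pipeline over the jointly fine-tuned Arabic religious-text model
fill_mask = pipeline("fill-mask", model="pourmand1376/arabic-quran-nahj-sahife")

# First example from this card; the model ranks candidate completions for [MASK]
for pred in fill_mask("ذَلِكَ [MASK] لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ"):
    print(pred["token_str"], round(pred["score"], 3))
```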
|
ghadeermobasher/WLT-PubMedBERT-NCBI-Disease | 0123dc837d0515f5eb048fa730f4be8b32da9eac | 2022-06-09T11:22:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-PubMedBERT-NCBI-Disease | 9 | null | transformers | 12,674 | Entry not found |
annazdr/xlm-roberta-ecoicop-polish | 48f7116e2902db9ea47382cc9563f4271b8e4bab | 2022-06-14T11:50:15.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | annazdr | null | annazdr/xlm-roberta-ecoicop-polish | 9 | null | transformers | 12,675 | Entry not found |
ghadeermobasher/WLT-PubMedBERT-BC2GM | 0083c6935e7e8712f199c6f41ae43e7128e6bf7a | 2022-06-10T16:29:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-PubMedBERT-BC2GM | 9 | null | transformers | 12,676 | Entry not found |
ghadeermobasher/WLT-SciBERT-BC2GM | 94ede29cab8e82b26693710f398f8d277fde2ead | 2022-06-09T16:35:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-SciBERT-BC2GM | 9 | null | transformers | 12,677 | Entry not found |
ghadeermobasher/WLT-BlueBERT-Linnaeus | 700ede52186810ea689d01c4943123b2cfaa3aa0 | 2022-06-10T14:41:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-BlueBERT-Linnaeus | 9 | null | transformers | 12,678 | Entry not found |
ghadeermobasher/WLT-PubMedBERT-Linnaeus | 4bb5bb64e825a889f1790b188d24ccb69a280d8f | 2022-06-10T11:05:35.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-PubMedBERT-Linnaeus | 9 | null | transformers | 12,679 | Entry not found |
ghadeermobasher/WLT-SciBERT-Linnaeus | 243c0715e31f292c90af66f9709134e6514b466c | 2022-06-10T14:03:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-SciBERT-Linnaeus | 9 | null | transformers | 12,680 | Entry not found |
RomanCast/xlmr-miam-loria-finetuned | 3ddb8e5c581767910d31bb0adcbe5ac1a97d67f7 | 2022-06-09T15:14:27.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"fr",
"transformers"
]
| text-classification | false | RomanCast | null | RomanCast/xlmr-miam-loria-finetuned | 9 | null | transformers | 12,681 | ---
language:
- fr
--- |
huggingtweets/mrbeast | 919d541945ad3427edad2af87e2e251772892867 | 2022-06-09T16:16:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/mrbeast | 9 | null | transformers | 12,682 | ---
language: en
thumbnail: http://www.huggingtweets.com/mrbeast/1654791349427/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/994592419705274369/RLplF55e_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MrBeast</div>
<div style="text-align: center; font-size: 14px;">@mrbeast</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MrBeast.
| Data | MrBeast |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 157 |
| Short tweets | 713 |
| Tweets kept | 2376 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lj98epf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mrbeast's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2o881m6c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2o881m6c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mrbeast')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fabianmmueller/deep-haiku-gpt-j-6b-8bit | 2ded223ddecb39c99a050ba63d9de22edd3e8311 | 2022-06-13T15:40:47.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | fabianmmueller | null | fabianmmueller/deep-haiku-gpt-j-6b-8bit | 9 | null | transformers | 12,683 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deep-haiku-gpt-j-6b-8bit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deep-haiku-gpt-j-6b-8bit
This model is a fine-tuned version of [gpt-j-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit) on the [haiku](https://huggingface.co/datasets/statworx/haiku) dataset.
## Model description
The model is a fine-tuned version of GPT-J-6B-8Bit for the generation of [Haikus](https://en.wikipedia.org/wiki/Haiku). The model, data and training procedure are inspired by a [blog post by Robert A. Gonsalves](https://towardsdatascience.com/deep-haiku-teaching-gpt-j-to-compose-with-syllable-patterns-5234bca9701).
We used the same multitask training approach as in the post, but significantly extended the dataset (to almost double the size of the original one). A prepared version of the dataset can be found [here](https://huggingface.co/datasets/statworx/haiku).
## Intended uses & limitations
The model is intended to generate Haikus. To do so, it was trained using a multitask learning approach (see [Caruana 1997](http://www.cs.cornell.edu/~caruana/mlj97.pdf)) with the following four different tasks:
- topic2graphemes `(keywords = text)`
- topic2phonemes `<keyword_phonemes = text_phonemes>`
- graphemes2phonemes `[text = text_phonemes]`
- phonemes2graphemes `{text_phonemes = text}`
To use the model, use an appropriate prompt like `"(dog rain ="` and let the model generate a Haiku given the keyword.
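A rough generation sketch (it assumes the environment can load the 8-bit GPT-J weights, e.g. via bitsandbytes or the loading code from the original gpt-j-6B-8bit repository; the sampling settings are illustrative):
```python
from transformers import pipeline

# Text generation over the fine-tuned 8-bit GPT-J haiku model.
# Loading the 8-bit checkpoint may need extra setup (bitsandbytes or the
# original gpt-j-6B-8bit loading code); this is treated here as an assumption.
generator = pipeline("text-generation", model="fabianmmueller/deep-haiku-gpt-j-6b-8bit")

# topic2graphemes prompt: "(keywords =" and let the model complete the haiku
print(generator("(dog rain =", max_new_tokens=48, do_sample=True, top_p=0.95))
```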
## Training and evaluation data
We used a collection of existing haikus for training. All haikus were used both in a graphemes version and in a phonemes version. In addition, we extracted keywords for all haikus using [KeyBERT](https://github.com/MaartenGr/KeyBERT) and sorted out haikus with low text quality according to the [GRUEN score](https://github.com/WanzhengZhu/GRUEN).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
rsuwaileh/IDRISI-LMR-HD-TB-partition | a5f82732a1d0644770403fad0f88233ed31d8be5 | 2022-07-18T09:17:11.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | rsuwaileh | null | rsuwaileh/IDRISI-LMR-HD-TB-partition | 9 | null | transformers | 12,684 | This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained on the Hurricane Dorian 2019 event (only the training split is used) from the [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-based LMR mode, using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
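A minimal usage sketch with the Transformers pipeline (the example tweet is illustrative, and the exact location-type label names come from this checkpoint's config):
```python
from transformers import pipeline

# Token-classification pipeline for Location Mention Recognition (type-based mode)
lmr = pipeline(
    "token-classification",
    model="rsuwaileh/IDRISI-LMR-HD-TB-partition",
    aggregation_strategy="simple",  # merge word pieces into whole location mentions
)

# Illustrative crisis tweet mentioning locations
print(lmr("Hurricane Dorian is approaching the Bahamas and the coast of Florida."))
```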
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TL](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition/)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
tauseefr84/distilbert-base-uncased-finetuned-emotion | 507db90411300ea5138671586d7ad3137a6b575f | 2022-06-12T20:52:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | tauseefr84 | null | tauseefr84/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,685 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.838
- name: F1
type: f1
value: 0.822753081351476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5268
- Accuracy: 0.838
- F1: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9225 | 1.0 | 250 | 0.5268 | 0.838 | 0.8228 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
course5i/SEAD-L-6_H-256_A-8-mnli | 82c7987b2cac657291d9e1684b43853e45812ca8 | 2022-06-12T22:43:38.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:mnli",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
]
| text-classification | false | course5i | null | course5i/SEAD-L-6_H-256_A-8-mnli | 9 | null | transformers | 12,686 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- mnli
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-256_A-8-mnli
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as the teacher, using the SEAD framework on the **mnli** task. For weight initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
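A rough usage sketch for MNLI-style inference is given below; the premise/hypothesis pair is illustrative, and the label names are read from the checkpoint's `id2label` config rather than assumed:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "course5i/SEAD-L-6_H-256_A-8-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# MNLI classifies a (premise, hypothesis) sentence pair
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names come from the checkpoint config
```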
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import torch

# training_args.bin stores the serialized TrainingArguments used for fine-tuning
hyperparameters = torch.load("training_args.bin")
```
### Evaluation results
| eval_m-accuracy | eval_m-runtime | eval_m-samples_per_second | eval_m-steps_per_second | eval_m-loss | eval_m-samples | eval_mm-accuracy | eval_mm-runtime | eval_mm-samples_per_second | eval_mm-steps_per_second | eval_mm-loss | eval_mm-samples |
|:---------------:|:--------------:|:-------------------------:|:-----------------------:|:-----------:|:--------------:|:----------------:|:---------------:|:--------------------------:|:------------------------:|:------------:|:---------------:|
| 0.8277 | 6.4665 | 1517.828 | 47.476 | 0.6014 | 9815 | 0.8310 | 5.3528 | 1836.786 | 57.54 | 0.5724 | 9832 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
EventMiner/xlm-roberta-large-en-pt-es-doc | 7e3d9465fc56872f5857e77ba2cfa31cdecc9f13 | 2022-06-19T15:22:42.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"multilingual",
"transformers",
"news event detection",
"document level",
"EventMiner",
"license:apache-2.0"
]
| text-classification | false | EventMiner | null | EventMiner/xlm-roberta-large-en-pt-es-doc | 9 | null | transformers | 12,687 | ---
language: multilingual
tags:
- news event detection
- document level
- EventMiner
license: apache-2.0
---
# EventMiner
EventMiner is designed for multilingual news event detection. The goal of news event detection is the automatic extraction of event details from news articles. This extraction can be done at different levels (document, sentence, and word), ranging from coarse-grained to fine-grained information.
We submitted the best results based on EventMiner to [CASE 2021 shared task 1: *Multilingual Protest News Detection*](https://competitions.codalab.org/competitions/31247). Our approach won first place in English for the document level task while ranking within the top four solutions for other languages: Portuguese, Spanish, and Hindi.
*EventMiner/xlm-roberta-large-en-pt-es-doc* is an xlm-roberta-large sequence classification model fine-tuned on English, Portuguese and Spanish document-level data from the multilingual version of the GLOCON gold standard dataset released with [CASE 2021](https://aclanthology.org/2021.case-1.11/). <br>
Labels:
- Label_0: News article does not contain information about a past or ongoing socio-political event
- Label_1: News article contains information about a past or ongoing socio-political event
More details about the training procedure are available with our [codebase](https://github.com/HHansi/EventMiner).
# How to Use
## Load Model
```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
model_name = 'EventMiner/xlm-roberta-large-en-pt-es-doc'
tokenizer = XLMRobertaTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)
```
## Classification
```python
from transformers import pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("Police arrested five more student leaders on Monday when implementing the strike call given by MSU students union as a mark of protest against the decision to introduce payment seats in first-year commerce programme.")
```
# Citation
If you use this model, please consider citing the following paper.
```
@inproceedings{hettiarachchi-etal-2021-daai,
title = "{DAAI} at {CASE} 2021 Task 1: Transformer-based Multilingual Socio-political and Crisis Event Detection",
author = "Hettiarachchi, Hansi and
Adedoyin-Olowe, Mariam and
Bhogal, Jagdev and
Gaber, Mohamed Medhat",
booktitle = "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.case-1.16",
doi = "10.18653/v1/2021.case-1.16",
pages = "120--130",
}
``` |
ghadeermobasher/BC5CDR-Chem-Modified-SciBERT-512 | 32ede9744bc23150a93ddd2264539f57aeb93143 | 2022-06-13T23:03:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified-SciBERT-512 | 9 | null | transformers | 12,688 | Entry not found |
Jerimee/autotrain-dontknowwhatImdoing-980432459 | 8ce64189b4e0eca357183a93bb7ba0ac42a55a24 | 2022-06-14T01:36:33.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Jerimee/autotrain-data-dontknowwhatImdoing",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | Jerimee | null | Jerimee/autotrain-dontknowwhatImdoing-980432459 | 9 | 1 | transformers | 12,689 | ---
tags: autotrain
language: en
widget:
- text: "Jerimee"
example_title: "a weird human name"
- text: "Curtastica"
example_title: "a goblin name"
- text: "Fatima"
example_title: "a common human name"
datasets:
- Jerimee/autotrain-data-dontknowwhatImdoing
co2_eq_emissions: 0.012147398577917884
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 980432459
- CO2 Emissions (in grams): 0.012147398577917884
## Validation Metrics
- Loss: 0.0469294898211956
- Accuracy: 0.9917355371900827
- Precision: 0.9936708860759493
- Recall: 0.9936708860759493
- AUC: 0.9990958408679927
- F1: 0.9936708860759493
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Jerimee/autotrain-dontknowwhatImdoing-980432459
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jerimee/autotrain-dontknowwhatImdoing-980432459", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jerimee/autotrain-dontknowwhatImdoing-980432459", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
ghadeermobasher/BC5CDR-Chem-Original-SciBERT-384 | f5cddc5ca3bc5a630e0e08602dd0865fa5ca1b71 | 2022-06-14T01:35:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Original-SciBERT-384 | 9 | null | transformers | 12,690 | Entry not found |
olivia371/finetuning-sentiment-model-3000-samples | 02d4d1332d397984b05363471f3aed2968a6568c | 2022-06-14T15:05:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | olivia371 | null | olivia371/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,691 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9253731343283581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2348
- Accuracy: 0.925
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ghadeermobasher/BC5CDR-Chem-Original-PubMedBERT-512 | 0a5396ae4673ef2d9d181f18afdd49f0cbd68038 | 2022-06-15T11:35:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Original-PubMedBERT-512 | 9 | null | transformers | 12,692 | Entry not found |
cookpad/mt5-base-indonesia-recipe-query-generation_v2 | b4ddb0fb466ed531a8af996314a22e8cadb8f782 | 2022-06-16T14:48:45.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | cookpad | null | cookpad/mt5-base-indonesia-recipe-query-generation_v2 | 9 | null | transformers | 12,693 | Entry not found |
EddieChen372/incoder-1B-finetuned-jest | 3b5a09653641c0963b31fedca70e11a55f4dda7e | 2022-06-26T17:44:58.000Z | [
"pytorch",
"xglm",
"text-generation",
"transformers"
]
| text-generation | false | EddieChen372 | null | EddieChen372/incoder-1B-finetuned-jest | 9 | null | transformers | 12,694 | Entry not found |
johntang/finetuning-sentiment-model-3000-samples | 559451ae43523527ba4320593a675cb0296c317f | 2022-07-13T14:02:11.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | johntang | null | johntang/finetuning-sentiment-model-3000-samples | 9 | null | transformers | 12,695 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8786885245901639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3426
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
chandrasutrisnotjhong/distilbert-base-uncased-finetuned-imdb | ad4e869af8cb93253b3ebbed0a50b376743cb8dd | 2022-06-20T04:59:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | chandrasutrisnotjhong | null | chandrasutrisnotjhong/distilbert-base-uncased-finetuned-imdb | 9 | null | transformers | 12,696 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-science-v3-e2-v4-e2-manual | b28f8adb5c0e9b801440c91ac9dcb257cc0067ab | 2022-06-18T18:01:15.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-science-v3-e2-v4-e2-manual | 9 | null | transformers | 12,697 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-science-v3-e2-v4-e2-manual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-science-v3-e2-v4-e2-manual
This model is a fine-tuned version of [theojolliffe/bart-cnn-science-v3-e2](https://huggingface.co/theojolliffe/bart-cnn-science-v3-e2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9189
- Rouge1: 55.982
- Rouge2: 36.9147
- Rougel: 39.1563
- Rougelsum: 53.5959
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 42 | 0.9365 | 53.4332 | 34.0477 | 36.9735 | 51.1918 | 142.0 |
| No log | 2.0 | 84 | 0.9189 | 55.982 | 36.9147 | 39.1563 | 53.5959 | 142.0 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Onlydrinkwater/t5-small-de-en-mt | 908bb3688db73af544531b64dad0135fe1a4cab1 | 2022-06-18T23:25:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Onlydrinkwater | null | Onlydrinkwater/t5-small-de-en-mt | 9 | null | transformers | 12,698 | Entry not found |
amissier/distilbert-amazon-shoe-reviews | fc8bac23745a37e69b6837570ac84c3631436a11 | 2022-06-19T08:02:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | amissier | null | amissier/distilbert-amazon-shoe-reviews | 9 | null | transformers | 12,699 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-amazon-shoe-reviews
results:
- task:
type: text-classification
name: Text Classification
dataset:
type: amazon_us_reviews
name: Amazon US reviews
split: Shoes
metrics:
- type: accuracy
value: 0.48
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-amazon-shoe-reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3445
- Accuracy: 0.48
- F1: [0. 0. 0. 0. 0.64864865]
- Precision: [0. 0. 0. 0. 0.48]
- Recall: [0. 0. 0. 0. 1.]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------:|:----------------:|
| No log | 1.0 | 15 | 1.3445 | 0.48 | [0. 0. 0. 0. 0.64864865] | [0. 0. 0. 0. 0.48] | [0. 0. 0. 0. 1.] |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|