modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ScandinavianMrT/gpt2_prefinetune_IMDB | 2455457a8733f6133a7534f6e653310f6c1f19c3 | 2022-03-16T19:05:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | ScandinavianMrT | null | ScandinavianMrT/gpt2_prefinetune_IMDB | 6 | null | transformers | 15,500 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_prefinetune_IMDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_prefinetune_IMDB
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
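A minimal usage sketch, assuming the standard `transformers` text-generation pipeline; the prompt and generation settings are illustrative placeholders:
```python
# Minimal sketch: load the fine-tuned checkpoint with the text-generation pipeline.
# The prompt and generation settings are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="ScandinavianMrT/gpt2_prefinetune_IMDB")
print(generator("The movie was", max_length=40, num_return_sequences=1)[0]["generated_text"])
```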
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
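A sketch of how these settings map onto `transformers.TrainingArguments`; the output directory is a hypothetical name, and the Adam betas/epsilon are the library defaults:
```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
# "gpt2_prefinetune_IMDB" is a hypothetical output directory; Adam betas/epsilon are the defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2_prefinetune_IMDB",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```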
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7838 | 1.0 | 2997 | 3.6875 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
triet1102/bert-base-cased-GoogleRE-masked-subj-obj | 90ceb4b9ca0f0074fcf1dcb55b7f3a8c7fc31659 | 2022-03-17T16:28:16.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | triet1102 | null | triet1102/bert-base-cased-GoogleRE-masked-subj-obj | 6 | null | transformers | 15,501 | Entry not found |
cammy/led-large-16384-arxiv-100-MDS | a9e2f0c6502c64d9e6266ad0b86840c2528d161c | 2022-03-17T19:09:17.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | cammy | null | cammy/led-large-16384-arxiv-100-MDS | 6 | null | transformers | 15,502 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-large-16384-arxiv-100-MDS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-large-16384-arxiv-100-MDS
This model is a fine-tuned version of [allenai/led-large-16384-arxiv](https://huggingface.co/allenai/led-large-16384-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3897
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 512.0
## Model description
More information needed
## Intended uses & limitations
More information needed
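A minimal usage sketch, assuming the standard `transformers` summarization pipeline; the input text and length settings are illustrative placeholders:
```python
# Minimal sketch: summarize a (concatenated) multi-document input with the fine-tuned LED model.
# The placeholder text and length settings are illustrative only.
from transformers import pipeline

summarizer = pipeline("summarization", model="cammy/led-large-16384-arxiv-100-MDS")
long_document = "Replace this with the concatenated source documents to be summarized."
print(summarizer(long_document, max_length=256, min_length=32)[0]["summary_text"])
```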
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.1144 | 13.2756 | 2.6204 | 9.2686 | 10.2289 | 184.0 |
| No log | 2.0 | 50 | 3.3897 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
gagan3012/TrOCR-Ar | 74a112bcb8365de3bca502091397c516b7c0fe9d | 2022-03-20T22:11:39.000Z | [
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"transformers"
]
| null | false | gagan3012 | null | gagan3012/TrOCR-Ar | 6 | null | transformers | 15,503 | Entry not found |
rahulacj/mbart-large-cc25-finetuned-hi-to-en | 276db0b0d8d9848e30d86cda8aead3441abaeab2 | 2022-03-26T14:06:02.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | rahulacj | null | rahulacj/mbart-large-cc25-finetuned-hi-to-en | 6 | null | transformers | 15,504 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-cc25-finetuned-hi-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-hi-to-en
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4710
- Bleu: 16.6154
- Gen Len: 42.6244
## Model description
More information needed
## Intended uses & limitations
More information needed
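A minimal usage sketch, assuming the MBart language codes of the base model (`hi_IN` → `en_XX`); the codes and the example sentence are illustrative assumptions:
```python
# Minimal sketch: translate Hindi to English with the fine-tuned MBart checkpoint.
# The language codes (hi_IN / en_XX) follow the base model's convention and are an assumption here.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "rahulacj/mbart-large-cc25-finetuned-hi-to-en"
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="hi_IN", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer("मुझे आपकी मदद चाहिए।", return_tensors="pt")
generated = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```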
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.5705 | 1.0 | 3955 | 1.4858 | 14.8984 | 47.6759 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
facebook/regnet-y-640-seer | 781d316c0203101f717a74eea442047576c2a87c | 2022-03-31T12:12:50.000Z | [
"pytorch",
"regnet",
"feature-extraction",
"arxiv:2202.08360",
"transformers",
"vision",
"license:apache-2.0"
]
| feature-extraction | false | facebook | null | facebook/regnet-y-640-seer | 6 | null | transformers | 15,505 | ---
license: apache-2.0
tags:
- vision
widgets:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNetModel
The RegNet model was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER).
Disclaimer: The team releasing RegNetModel did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors trained [RegNet](https://huggingface.co/?models=regnet) models in a self-supervised fashion on billions of random images from the internet.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetModel.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 1088, 7, 7]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
EMBO/sd-geneprod-roles | a0eface476eded414f40ad2876db49df45bb19cf | 2022-03-27T13:23:03.000Z | [
"pytorch",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-nlp",
"transformers",
"token classification",
"license:agpl-3.0",
"autotrain_compatible"
]
| token-classification | false | EMBO | null | EMBO/sd-geneprod-roles | 6 | null | transformers | 15,506 | ---
language:
- english
thumbnail:
tags:
- token classification
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-geneprod-roles
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `GENEPROD_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is to infer the semantic role of gene products (genes and proteins) with regard to the causal hypotheses tested in experiments reported in scientific papers.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-geneprod-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
    print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBL/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: GENEPROD_ROLES
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
- Epochs: 0.9
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On 7178 examples of the test set with `sklearn.metrics`:
```
                precision    recall  f1-score   support
CONTROLLED_VAR       0.81      0.86      0.83      7835
  MEASURED_VAR       0.82      0.85      0.84      9330
     micro avg       0.82      0.85      0.83     17165
     macro avg       0.82      0.85      0.83     17165
  weighted avg       0.82      0.85      0.83     17165
{'test_loss': 0.03846803680062294, 'test_accuracy_score': 0.9854472664459946, 'test_precision': 0.8156312625250501, 'test_recall': 0.8535974366443344, 'test_f1': 0.8341825841897008, 'test_runtime': 58.7369, 'test_samples_per_second': 122.206, 'test_steps_per_second': 1.924}
```
|
Ketzu/koelectra-sts-v0.5 | e85f9d260e26396aa0aa3d4f66c4ea5fa025abbb | 2022-03-19T22:19:46.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Ketzu | null | Ketzu/koelectra-sts-v0.5 | 6 | null | transformers | 15,507 | ---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: koelectra-sts-v0.5
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Spearmanr
type: spearmanr
value: 0.87026647480689
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-sts-v0.5
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0213
- Pearson: 0.9958
- Spearmanr: 0.8703
## Model description
More information needed
## Intended uses & limitations
More information needed
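A minimal usage sketch, assuming the checkpoint exposes a sequence-classification head that scores a Korean sentence pair; the example sentences are illustrative placeholders:
```python
# Minimal sketch: score a sentence pair with the fine-tuned ELECTRA checkpoint.
# For an STS-style regression head the logits hold a single similarity score; that is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Ketzu/koelectra-sts-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # higher values are expected to indicate higher semantic similarity
```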
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|
| 0.058 | 1.0 | 6250 | 0.0428 | 0.9915 | 0.8702 |
| 0.0433 | 2.0 | 12500 | 0.0448 | 0.9911 | 0.8685 |
| 0.0362 | 3.0 | 18750 | 0.0261 | 0.9950 | 0.8705 |
| 0.0107 | 4.0 | 25000 | 0.0234 | 0.9953 | 0.8702 |
| 0.0075 | 5.0 | 31250 | 0.0213 | 0.9958 | 0.8703 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Aleksandar1932/gpt-neo-125M-rock | dab2957f1540288eda109c5685da444006ddaf94 | 2022-03-19T14:55:53.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt-neo-125M-rock | 6 | null | transformers | 15,508 | Entry not found |
xyfigo/distilbert-base-uncased-finetuned-emotion | 0d33d1944d30a6f621ff82cfb98042d87283b23f | 2022-03-19T15:30:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | xyfigo | null | xyfigo/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,509 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9281714323715586
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2286
- Accuracy: 0.928
- F1: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
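A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the example sentence is an illustrative placeholder:
```python
# Minimal sketch: classify the emotion of a sentence with the fine-tuned checkpoint.
# The example sentence is an illustrative placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="xyfigo/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see my friends this weekend!"))
```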
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8579 | 1.0 | 250 | 0.3272 | 0.903 | 0.9008 |
| 0.2543 | 2.0 | 500 | 0.2286 | 0.928 | 0.9282 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
axiomepic/nethack-gpt2 | 912d9d97c81ea99c23056724e534fe952fc7313f | 2022-03-22T22:36:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | axiomepic | null | axiomepic/nethack-gpt2 | 6 | null | transformers | 15,510 | Entry not found |
DeltaHub/QuestionTopic_T5-large_Compacter | f8bf680afedfcbd65627d0aa71249b4dd3164635 | 2022-03-20T01:13:49.000Z | [
"pytorch",
"transformers"
]
| null | false | DeltaHub | null | DeltaHub/QuestionTopic_T5-large_Compacter | 6 | null | transformers | 15,511 | Entry not found |
aytugkaya/distilbert-base-uncased-finetuned-clinc | c3e640ee46391a45db23da616fc666993d0df00e | 2022-03-20T22:21:56.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aytugkaya | null | aytugkaya/distilbert-base-uncased-finetuned-clinc | 6 | null | transformers | 15,512 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9148387096774193
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7760
- Accuracy: 0.9148
## Model description
More information needed
## Intended uses & limitations
More information needed
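A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the predicted label is a CLINC intent and the example query is an illustrative placeholder:
```python
# Minimal sketch: predict the intent of a user query with the fine-tuned checkpoint.
# The example query is an illustrative placeholder.
from transformers import pipeline

intent_classifier = pipeline("text-classification", model="aytugkaya/distilbert-base-uncased-finetuned-clinc")
print(intent_classifier("How do I reset the PIN on my debit card?"))
```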
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2994 | 1.0 | 318 | 3.3016 | 0.7442 |
| 2.6387 | 2.0 | 636 | 1.8892 | 0.8339 |
| 1.5535 | 3.0 | 954 | 1.1602 | 0.8948 |
| 1.0139 | 4.0 | 1272 | 0.8619 | 0.9084 |
| 0.7936 | 5.0 | 1590 | 0.7760 | 0.9148 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-roberta-regularisation | 5ac9d3ef94f407de4be9ca68d21d7d6e61d5d0c8 | 2022-03-22T09:45:09.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-roberta-regularisation | 6 | null | transformers | 15,513 | Entry not found |
EALeon16/results | af86e88617001c1f3fb1985b2fe3711d8426d540 | 2022-03-22T04:38:17.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | EALeon16 | null | EALeon16/results | 6 | null | transformers | 15,514 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9229
- Accuracy: 0.7586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9119 | 1.0 | 258 | 0.8750 | 0.7241 |
| 0.8307 | 2.0 | 516 | 0.9229 | 0.7586 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Yaxin/xlm-roberta-base-conll2003-ner | 2b40f04ec7598c3744eb95c17a52f0d1200cb4e3 | 2022-03-22T08:11:52.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Yaxin | null | Yaxin/xlm-roberta-base-conll2003-ner | 6 | null | transformers | 15,515 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test-conll2003-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9459188783174762
- name: Recall
type: recall
value: 0.9537192864355436
- name: F1
type: f1
value: 0.94980306712478
- name: Accuracy
type: accuracy
value: 0.9911218410498034
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-conll2003-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0470
- Precision: 0.9459
- Recall: 0.9537
- F1: 0.9498
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
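A minimal usage sketch, assuming the standard `transformers` token-classification pipeline; the example sentence is an illustrative placeholder:
```python
# Minimal sketch: run named-entity recognition with the fine-tuned checkpoint.
# aggregation_strategy="simple" merges word pieces into whole entity spans.
from transformers import pipeline

ner = pipeline("token-classification",
               model="Yaxin/xlm-roberta-base-conll2003-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```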
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/led-base-16384-100-MDS | 49ffc487d915d726c10481a8a1c196917059fede | 2022-03-23T06:55:50.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | cammy | null | cammy/led-base-16384-100-MDS | 6 | null | transformers | 15,516 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-100-MDS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-base-16384-100-MDS
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1425
- Rouge1: 16.7324
- Rouge2: 5.8501
- Rougel: 13.908
- Rougelsum: 13.8469
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.6187 | 15.1426 | 4.2468 | 13.4488 | 13.38 | 20.0 |
| No log | 2.0 | 50 | 3.9873 | 13.4341 | 3.3283 | 10.2739 | 10.8229 | 20.0 |
| No log | 3.0 | 75 | 4.0264 | 18.1891 | 5.3395 | 15.0797 | 15.3586 | 20.0 |
| No log | 4.0 | 100 | 4.0929 | 17.0091 | 5.5336 | 14.4381 | 14.5149 | 19.5 |
| No log | 5.0 | 125 | 4.1425 | 16.7324 | 5.8501 | 13.908 | 13.8469 | 20.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Deep1994/t5-paraphrase-quora | 12c15c9b803bac3007e92f1fb8ffc09c1c193d73 | 2022-03-24T18:12:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| text2text-generation | false | Deep1994 | null | Deep1994/t5-paraphrase-quora | 6 | 1 | transformers | 15,517 | ---
license: afl-3.0
---
## Model description
T5 model for generating paraphrases of English sentences. Trained on the [Quora Paraphrase dataset](https://www.kaggle.com/c/quora-question-pairs).
## Online demo website
Click [https://huggingface.co/spaces/Deep1994/t5-paraphrase](https://huggingface.co/spaces/Deep1994/t5-paraphrase) to have a try online.
## How to use
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
def set_seed(seed):
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
set_seed(1234)
model = T5ForConditionalGeneration.from_pretrained('Deep1994/t5-paraphrase-quora')
tokenizer = T5Tokenizer.from_pretrained('Deep1994/t5-paraphrase-quora')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
sentence = "What is the best comedy TV serial/series?"
text = "paraphrase: " + sentence
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
# top k/ top p sampling
beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    max_length=20,
    top_k=50,
    top_p=0.95,
    early_stopping=True,
    num_return_sequences=5
)
# beam search
# beam_outputs = model.generate(
# input_ids=input_ids,
# attention_mask=attention_masks,
# max_length=20,
# num_beams=5,
# no_repeat_ngram_size=2,
# num_return_sequences=5,
# early_stopping=True
# )
print ("\nOriginal Question: ")
print (sentence)
print ("\n")
print ("Paraphrased Questions: ")
final_outputs = []
for beam_output in beam_outputs:
    sent = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    if sent.lower() != sentence.lower() and sent not in final_outputs:
        final_outputs.append(sent)
for i, final_output in enumerate(final_outputs):
    print("{}: {}".format(i, final_output))
```
```
Original Question:
What is the best comedy TV serial/series?
Beam search:
0: What is the best comedy TV series?
1: What are some of the best comedy TV series?
2: Which is the best comedy TV series?
3: What are the best comedy TV series?
4: What are some of the best comedy TV shows?
Top k/ Top p sampling:
0: What are some of the best comedy TV dramas?
1: What are the best comedy TV series or series?
2: What are the best comedy television serials?
3: What is the best comedy series?
4: Which are some best comedy TV series series?
```
For more reference on training your own T5 model, do check out [t5-paraphrase-generation](https://github.com/Deep1994/t5-paraphrase-generation). |
apoorvumang/kgt5-base-wikikg90mv2 | c9e8bd16bcfa969f8813761c19e0e1e998ab36bf | 2022-03-23T15:02:38.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | apoorvumang | null | apoorvumang/kgt5-base-wikikg90mv2 | 6 | null | transformers | 15,518 | ---
license: mit
widget:
- text: "Apoorv Umang Saxena| family name"
example_title: "Family name prediction"
- text: "Apoorv Saxena| country"
example_title: "Country prediction"
- text: "World War 2| followed by"
example_title: "followed by"
---
This is a t5-base model (initialized from pretrained weights) finetuned on the WikiKG90Mv2 dataset. Please see https://github.com/apoorvumang/kgt5/ for more details on the method.
This model was trained on the tail entity prediction task, i.e. given the subject entity and relation, predict the object entity. Input should be provided in the form of "\<entity text\>| \<relation text\>".
We used the raw text title and descriptions to get entity and relation textual representations. These raw texts were obtained from ogb dataset itself (dataset/wikikg90m-v2/mapping/entity.csv and relation.csv). Entity representation was set to the title, and description was used to disambiguate if 2 entities had the same title. If still no disambiguation was possible, we used the wikidata ID (eg. Q123456).
We trained the model on WikiKG90Mv2 for approx 1.5 epochs on 4x1080Ti GPUs. The training time for 1 epoch was approx 5.5 days.
To evaluate the model, we sample 300 times from the decoder for each input (s,r) pair. We then remove predictions which do not map back to a valid entity, and then rank the predictions by their log probabilities. Filtering was performed subsequently. **We achieve 0.239 validation MRR** (the full leaderboard is here https://ogb.stanford.edu/docs/lsc/leaderboards/#wikikg90mv2)
You can try the following code in an ipython notebook to evaluate the pre-trained model. The full procedure of mapping entity to ids, filtering etc. is not included here for sake of simplicity but can be provided on request if needed. Please contact Apoorv ([email protected]) for clarifications/details.
---------
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2")
model = AutoModelForSeq2SeqLM.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2")
```
```
import torch
def getScores(ids, scores, pad_token_id):
    """get sequence scores from model.generate output"""
    scores = torch.stack(scores, dim=1)
    log_probs = torch.log_softmax(scores, dim=2)
    # remove start token
    ids = ids[:, 1:]
    # gather needed probs
    x = ids.unsqueeze(-1).expand(log_probs.shape)
    needed_logits = torch.gather(log_probs, 2, x)
    final_logits = needed_logits[:, :, 0]
    padded_mask = (ids == pad_token_id)
    final_logits[padded_mask] = 0
    final_scores = final_logits.sum(dim=-1)
    return final_scores.cpu().detach().numpy()

def topkSample(input, model, tokenizer,
               num_samples=5,
               num_beams=1,
               max_output_length=30):
    tokenized = tokenizer(input, return_tensors="pt")
    out = model.generate(**tokenized,
                         do_sample=True,
                         num_return_sequences=num_samples,
                         num_beams=num_beams,
                         eos_token_id=tokenizer.eos_token_id,
                         pad_token_id=tokenizer.pad_token_id,
                         output_scores=True,
                         return_dict_in_generate=True,
                         max_length=max_output_length,)
    out_tokens = out.sequences
    out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
    out_scores = getScores(out_tokens, out.scores, tokenizer.pad_token_id)
    pair_list = [(x[0], x[1]) for x in zip(out_str, out_scores)]
    sorted_pair_list = sorted(pair_list, key=lambda x: x[1], reverse=True)
    return sorted_pair_list

def greedyPredict(input, model, tokenizer):
    input_ids = tokenizer([input], return_tensors="pt").input_ids
    out_tokens = model.generate(input_ids)
    out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
    return out_str[0]
```
```
# an example from validation set that the model predicts correctly
# you can try your own examples here. what's your noble title?
input = "Sophie Valdemarsdottir| noble title"
out = topkSample(input, model, tokenizer, num_samples=5)
out
```
You can further load the list of entity aliases, then filter only those predictions which are valid entities then create a reverse mapping from alias -> integer id to get final predictions in required format.
However, loading these aliases in memory as a dictionary requires a lot of RAM + you need to download the aliases file (made available here https://storage.googleapis.com/kgt5-wikikg90mv2/ent_alias_list.pickle) (relation file: https://storage.googleapis.com/kgt5-wikikg90mv2/rel_alias_list.pickle)
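A minimal sketch of that filtering step, assuming each pickle loads as a list of alias strings whose list position corresponds to the integer ID (this layout is an assumption, not confirmed above):
```python
# Sketch only: map sampled predictions back to entity IDs and drop predictions with no known alias.
# Assumes ent_alias_list.pickle loads as a list indexed by entity ID; adjust if the format differs.
import pickle

with open("ent_alias_list.pickle", "rb") as f:
    entity_aliases = pickle.load(f)
alias_to_id = {alias: idx for idx, alias in enumerate(entity_aliases)}

def filter_predictions(sorted_pair_list):
    """Keep only predictions that map to a known entity, preserving the score ordering."""
    return [(alias_to_id[text], score) for text, score in sorted_pair_list if text in alias_to_id]
```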
The submitted validation/test results were obtained by sampling 300 times for each input, then applying the above procedure, followed by filtering known entities. The final MRR can vary slightly due to this sampling nature (we found that although beam search gives deterministic output, the results are inferior to sampling a large number of times).
```
# download valid.txt. you can also try same url with test.txt. however test does not contain the correct tails
!wget https://storage.googleapis.com/kgt5-wikikg90mv2/valid.txt
```
```
fname = 'valid.txt'
valid_lines = []
f = open(fname)
for line in f:
    valid_lines.append(line.rstrip())
f.close()
print(valid_lines[0])
```
```
from tqdm.auto import tqdm
# try unfiltered hits@k. this is approximation since model can sample same seq multiple times
# you should run this on gpu if you want to evaluate on all points with 300 samples each
k = 1
count_at_k = 0
max_predictions = k
max_points = 1000
for line in tqdm(valid_lines[:max_points]):
    input, target = line.split('\t')
    model_output = topkSample(input, model, tokenizer, num_samples=max_predictions)
    prediction_strings = [x[0] for x in model_output]
    if target in prediction_strings:
        count_at_k += 1
print('Hits at {0} unfiltered: {1}'.format(k, count_at_k/max_points))
``` |
VincentC12/rh_classification_kara | 506cedb962d1ea41edd000cbda3f844099d3ffd4 | 2022-03-28T11:53:41.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"sentiment-analysis"
]
| text-classification | false | VincentC12 | null | VincentC12/rh_classification_kara | 6 | null | pytorch | 15,519 | ---
language:
- en
library_name: pytorch
metrics:
- satisfaction
- culture organisationnelle
- leadership
- conditions de travail
tags:
- sentiment-analysis
widget:
- text: "My work is recognized by my superiors and I would even say that I feel like I have more recognition since we are on telework."
example_title: "Exemple leadership"
- text: "For Working conditions and wages in particular."
example_title: "Exemple conditions de travail"
- text: "A climate of overperformance is in place in the company."
example_title: "Exemple culture organisationnelle"
- text: "With regard to telework, I look forward to setting up the hybrid week, so 2 3 days at home and at the office."
example_title: "Exemple satisfaction"
---
This model was developed for KARA.
This model is:
- A tool for thematic classification of HR comments
- Trained to be used in ENGLISH (comments must be translated)
- Specialized for comments between 10 and 512 characters
This model is not:
- Suitable for detecting hate speech or a suicide note
Labels:
- Label_0 = Satisfaction
- Label_1 = Organizational Culture
- Label_2 = Leadership
- Label_3 = Working Conditions
version 0.0.1
Performance on the HRM dataset: 84.3% accuracy |
tartuNLP/liv4ever-hugging-mt | f27e017f9aa81d0cf0166c8ce62cea16f2dd6e56 | 2022-03-24T07:33:01.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | tartuNLP | null | tartuNLP/liv4ever-hugging-mt | 6 | null | transformers | 15,520 | ---
license: apache-2.0
tags:
- translation
widget:
- text: "<2li> Let us generate some Livonian text!"
--- |
buvnswrn/daml-t5-pretrain | 9ad8adf3bdd98a309afc6a883b2c61aba0496917 | 2022-03-24T09:08:34.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:imdb",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| translation | false | buvnswrn | null | buvnswrn/daml-t5-pretrain | 6 | null | transformers | 15,521 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- imdb
model-index:
- name: daml-t5-pretrain-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daml-t5-pretrain-imdb
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
rurupang/roberta-base-finetuned-sts-f1_ | 2f3d249a3150e573b3926b817a4522400795d747 | 2022-03-24T08:38:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | rurupang | null | rurupang/roberta-base-finetuned-sts-f1_ | 6 | null | transformers | 15,522 | Entry not found |
LeonLi279/DialoGPT-small-harrypotter | af0e8c691a32154cdff6c8417f4ff5273fc2c163 | 2022-03-24T12:47:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | LeonLi279 | null | LeonLi279/DialoGPT-small-harrypotter | 6 | null | transformers | 15,523 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
buvnswrn/daml-t5-pretrain-imdb-accelerate | 8ab023243ef07ade0c92a0bfd98309ac87c856fa | 2022-03-24T11:22:52.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:imdb",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| translation | false | buvnswrn | null | buvnswrn/daml-t5-pretrain-imdb-accelerate | 6 | null | transformers | 15,524 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- imdb
model-index:
- name: daml-t5-pretrain-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daml-t5-pretrain-imdb
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-tc-big-en-zle | fdc4126357f33b77c9f62582cdb1ed2ca7c4d13c | 2022-06-01T13:08:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"en",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-zle | 6 | null | transformers | 15,525 | ---
language:
- be
- en
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-zle
results:
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: flores101-devtest
type: flores_101
args: eng rus devtest
metrics:
- name: BLEU
type: bleu
value: 32.7
- task:
name: Translation eng-ukr
type: translation
args: eng-ukr
dataset:
name: flores101-devtest
type: flores_101
args: eng ukr devtest
metrics:
- name: BLEU
type: bleu
value: 32.1
- task:
name: Translation eng-bel
type: translation
args: eng-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-bel
metrics:
- name: BLEU
type: bleu
value: 24.9
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 45.5
- task:
name: Translation eng-ukr
type: translation
args: eng-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-ukr
metrics:
- name: BLEU
type: bleu
value: 37.7
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: tico19-test
type: tico19-test
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 33.7
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2012
type: wmt-2012-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 36.8
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2013
type: wmt-2013-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 26.9
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2014
type: wmt-2014-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 43.5
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2015
type: wmt-2015-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 34.9
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2016
type: wmt-2016-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 33.1
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2017
type: wmt-2017-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 37.3
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2018
type: wmt-2018-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 32.9
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2019
type: wmt-2019-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 31.8
- task:
name: Translation eng-rus
type: translation
args: eng-rus
dataset:
name: newstest2020
type: wmt-2020-news
args: eng-rus
metrics:
- name: BLEU
type: bleu
value: 25.5
---
# opus-mt-tc-big-en-zle
Neural machine translation model for translating from English (en) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT eng-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    ">>rus<< Are they coming as well?",
    ">>rus<< I didn't let Tom do what he wanted to do."
]
model_name = "pytorch-models/opus-mt-tc-big-en-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Они тоже приедут?
# Я не позволил Тому сделать то, что он хотел.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-zle")
print(pipe(">>rus<< Are they coming as well?"))
# expected output: Они тоже приедут?
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-bel | tatoeba-test-v2021-08-07 | 0.50345 | 24.9 | 2500 | 16237 |
| eng-rus | tatoeba-test-v2021-08-07 | 0.66182 | 45.5 | 19425 | 134296 |
| eng-ukr | tatoeba-test-v2021-08-07 | 0.60175 | 37.7 | 13127 | 80998 |
| eng-bel | flores101-devtest | 0.42078 | 11.2 | 1012 | 24829 |
| eng-rus | flores101-devtest | 0.59654 | 32.7 | 1012 | 23295 |
| eng-ukr | flores101-devtest | 0.60131 | 32.1 | 1012 | 22810 |
| eng-rus | newstest2012 | 0.62842 | 36.8 | 3003 | 64790 |
| eng-rus | newstest2013 | 0.54627 | 26.9 | 3000 | 58560 |
| eng-rus | newstest2014 | 0.68348 | 43.5 | 3003 | 61603 |
| eng-rus | newstest2015 | 0.62621 | 34.9 | 2818 | 55915 |
| eng-rus | newstest2016 | 0.60595 | 33.1 | 2998 | 62014 |
| eng-rus | newstest2017 | 0.64249 | 37.3 | 3001 | 60253 |
| eng-rus | newstest2018 | 0.61219 | 32.9 | 3000 | 61907 |
| eng-rus | newstest2019 | 0.57902 | 31.8 | 1997 | 48147 |
| eng-rus | newstest2020 | 0.52939 | 25.5 | 2002 | 47083 |
| eng-rus | tico19-test | 0.59314 | 33.7 | 2100 | 55843 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 01:58:40 EET 2022
* port machine: LM0-400-22516.local
|
agdsga/chinese-electra-large-discriminator-finetuned-ner-1 | 7c080290c3f244c505ad28212d170ea1d3d2dda8 | 2022-03-27T00:36:50.000Z | [
"pytorch",
"tensorboard",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | agdsga | null | agdsga/chinese-electra-large-discriminator-finetuned-ner-1 | 6 | null | transformers | 15,526 | Entry not found |
jasonyim2/distilbert-base-uncased-finetuned-emotion | 59732583b004c447cc3110d10928efb766d713a3 | 2022-03-27T05:00:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jasonyim2 | null | jasonyim2/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,527 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246345608107297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2166
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8132 | 1.0 | 250 | 0.3117 | 0.902 | 0.8990 |
| 0.2419 | 2.0 | 500 | 0.2166 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida | 8dde6f617d19d701a5b5245d8bc375671f5f3bd8 | 2022-03-26T19:22:14.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"es",
"transformers",
"generated_from_trainer",
"sentiment",
"emotion",
"license:afl-3.0",
"model-index"
]
| text-classification | false | dannyvas23 | null | dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida | 6 | 1 | transformers | 15,528 | ---
license: afl-3.0
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "La vida no merece la pena"
example_title: "Ejemplo 1"
- text: "Para vivir así lo mejor es estar muerto"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
- text: "Quiero terminar con todo"
example_title: "Ejemplo 4"
- text: "Disfruto de la vista"
example_title: "Ejemplo 5"
metrics:
- accuracy
model-index:
- name: electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0458
- Accuracy: 0.9916
## Model description
More information needed
## Intended uses & limitations
More information needed
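A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the example sentence is taken from the widget examples above:
```python
# Minimal sketch: classify a Spanish comment with the fine-tuned ELECTRA checkpoint.
# The example sentence comes from the widget examples in this card.
from transformers import pipeline

clasificador = pipeline(
    "text-classification",
    model="dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida",
)
print(clasificador("me siento triste por no poder viajar"))
```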
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.161100 | 1.0 | 0.133057 | 0.952718 |
| 0.134500 | 2.0 | 0.110966 | 0.960804 |
| 0.108500 | 3.0 | 0.086417 | 0.970835 |
| 0.099400 | 4.0 | 0.073618 | 0.974856 |
| 0.090500 | 5.0 | 0.065231 | 0.979629 |
| 0.080700 | 6.0 | 0.060849 | 0.982324 |
| 0.069200 | 7.0 | 0.054718 | 0.986125 |
| 0.060400 | 8.0 | 0.051153 | 0.985948 |
| 0.048200 | 9.0 | 0.045747 | 0.989748 |
| 0.045500 | 10.0 | 0.049992 | 0.988069 |
| 0.043400 | 11.0 | 0.046325 | 0.990234 |
| 0.034300 | 12.0 | 0.050746 | 0.989792 |
| 0.032900 | 13.0 | 0.043434 | 0.991737 |
| 0.028400 | 14.0 | 0.045003 | 0.991869 |
| 0.022300 | 15.0 | 0.045819 | 0.991648 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dennishe97/longformer-code-4096 | 6fc4784f957d673cba45875f60617ba466fb6e91 | 2022-03-31T23:53:20.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"transformers"
]
| feature-extraction | false | dennishe97 | null | dennishe97/longformer-code-4096 | 6 | null | transformers | 15,529 | Entry not found |
mikeadimech/punctuation-test-4 | 2c433a884865b3e72e2ee0ace7f76a7732285231 | 2022-03-28T15:09:06.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mikeadimech | null | mikeadimech/punctuation-test-4 | 6 | null | transformers | 15,530 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: punctuation-test-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 39.1294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# punctuation-test-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3411
- Bleu: 39.1294
- Gen Len: 18.4812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.3331 | 1.0 | 625 | 0.3411 | 39.1294 | 18.4812 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
princeton-nlp/CoFi-MNLI-s60 | ab13a33db799d2f2657634a67de34c8298be0e79 | 2022-05-01T01:20:27.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
]
| text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-MNLI-s60 | 6 | null | transformers | 15,531 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset MNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
Cheatham/xlm-roberta-large-finetuned-d1-001 | 5c692e0507f0eb35d8f6d0a7c4e4b32961446572 | 2022-03-30T13:50:06.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d1-001 | 6 | null | transformers | 15,532 | Entry not found |
vlsb/autotrain-security-texts-classification-roberta-688020754 | 75f05982d81e812eefa99ceca8c31271f14f6456 | 2022-03-30T20:55:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:vlsb/autotrain-data-security-texts-classification-roberta",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | vlsb | null | vlsb/autotrain-security-texts-classification-roberta-688020754 | 6 | null | transformers | 15,533 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vlsb/autotrain-data-security-texts-classification-roberta
co2_eq_emissions: 3.1151249696839685
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 688020754
- CO2 Emissions (in grams): 3.1151249696839685
## Validation Metrics
- Loss: 0.2810373902320862
- Accuracy: 0.8928571428571429
- Precision: 0.9272727272727272
- Recall: 0.8869565217391304
- AUC: 0.9500805152979066
- F1: 0.9066666666666666
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-texts-classification-roberta-688020754
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-texts-classification-roberta-688020754", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-texts-classification-roberta-688020754", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
vlsb/autotrain-security-text-classification-albert-688320769 | aaefa9583ebde94720301cc94ca405c7356ba81e | 2022-03-30T20:59:32.000Z | [
"pytorch",
"albert",
"text-classification",
"unk",
"dataset:vlsb/autotrain-data-security-text-classification-albert",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | vlsb | null | vlsb/autotrain-security-text-classification-albert-688320769 | 6 | null | transformers | 15,534 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vlsb/autotrain-data-security-text-classification-albert
co2_eq_emissions: 3.670416179055797
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 688320769
- CO2 Emissions (in grams): 3.670416179055797
## Validation Metrics
- Loss: 0.3046899139881134
- Accuracy: 0.8826530612244898
- Precision: 0.9181818181818182
- Recall: 0.8782608695652174
- AUC: 0.9423510466988727
- F1: 0.8977777777777778
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-text-classification-albert-688320769
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
rchiang/ingredients-parser | 5852d228acbb291b28056bf2c1c4ac9e3508b959 | 2022-03-30T23:16:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | rchiang | null | rchiang/ingredients-parser | 6 | null | transformers | 15,535 | Entry not found |
israel/fake-news-classification | e6251fd6781ee2fd86233797b0ce542985697866 | 2022-03-31T21:03:49.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | israel | null | israel/fake-news-classification | 6 | null | transformers | 15,536 | ---
license: mit
---
# Fake and real news classification task
Model : [DistilRoBERTa base model](https://huggingface.co/distilroberta-base)
Dataset : [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
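A minimal inference sketch (not an official example; the label names and their mapping to *fake* vs. *real* are assumptions that should be checked against the model config):
```python
from transformers import pipeline

# Load the fine-tuned classifier with the standard text-classification pipeline.
classifier = pipeline("text-classification", model="israel/fake-news-classification")
print(classifier("Breaking: scientists confirm the moon is made of cheese."))
```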
|
DMetaSoul/sbert-chinese-dtm-domain-v1-distill | 38e6603e8823bf68c95d6c6b78c7464c1fcf05fe | 2022-04-02T09:32:44.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
]
| sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-dtm-domain-v1-distill | 6 | null | sentence-transformers | 15,537 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-dtm-domain-v1-distill
This model is a distilled version (only 4 BERT layers) of our previously [open-sourced dialogue matching model](https://huggingface.co/DMetaSoul/sbert-chinese-dtm-domain-v1), suitable for **open-domain dialogue matching** scenarios (colloquial language), for example:
- 哪有好玩的 VS. 这附近有什么好玩的地方
- 定时25分钟 VS. 计时半个小时
- 我要听王琦的歌 VS. 放一首王琦的歌
Serving a large model trained offline directly in production places harsh demands on compute resources and makes it hard to meet latency, throughput, and other performance requirements of a business environment, so here we use distillation to make the large model lightweight. After distilling from 12 BERT layers down to 4, the number of parameters shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 4% (see the evaluation section below for details).
# Usage
## 1. Sentence-Transformers
Use this model through the [sentence-transformers](https://www.SBERT.net) framework; first install it:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embedding vectors with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-dtm-domain-v1-distill')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with HuggingFace Transformers and extract sentence embeddings as follows:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-dtm-domain-v1-distill')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-dtm-domain-v1-distill')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
Here we mainly compare against the corresponding teacher model before distillation:
*Performance*
| | Teacher | Student | Gap |
| ---------- | --------------------- | ------------------- | ----- |
| Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost | 24s | 12s | -50% |
| Latency | 39ms | 19ms | -51% |
| Throughput | 407 sentence/s | 815 sentence/s | 2.0x |
*Accuracy*
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher** | 78.35% | 74.45% | 32.17% | 75.95% | 44.00% | 14.50% | 66.84% | 55.17% |
| **Student** | 77.99% | 73.95% | 27.20% | 67.49% | 43.90% | 10.79% | 58.21% | 51.36% |
| **Gap** (abs.) | - | - | - | - | - | - | - | -3.81% |
*Tested on 10k sentences with a V100 GPU, batch_size=16, max_seq_len=256*
## Citing & Authors
E-mail: [email protected] |
vicl/distilbert-base-uncased-finetuned-cola | 11d53edcdf22eaff4c23159139a00090246760a7 | 2022-04-02T20:16:28.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | vicl | null | vicl/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 15,538 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5598704865754364
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- Matthews Correlation: 0.5599
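As a hedged illustration of how the checkpoint might be queried (CoLA is a grammatical-acceptability task, so the two labels are assumed to correspond to unacceptable/acceptable):
```python
from transformers import pipeline

# Score a sentence for grammatical acceptability.
classifier = pipeline("text-classification", model="vicl/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by she."))
```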
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5223 | 1.0 | 535 | 0.5444 | 0.4309 |
| 0.3457 | 2.0 | 1070 | 0.5213 | 0.5021 |
| 0.2351 | 3.0 | 1605 | 0.6793 | 0.5234 |
| 0.1693 | 4.0 | 2140 | 0.7587 | 0.5527 |
| 0.1301 | 5.0 | 2675 | 0.8697 | 0.5599 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
fangyuan/lfqa_role_classification | b315137e3f64095015e9fe76903d1d62814e0dce | 2022-05-19T20:21:02.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
]
| text2text-generation | false | fangyuan | null | fangyuan/lfqa_role_classification | 6 | null | transformers | 15,539 | ---
license: cc-by-nc-sa-4.0
---
|
thomasdehaene/xlm-roberta-base-nl-emoji-ner | be5ef15de7d93ada8eb5557abf1e74520b273b06 | 2022-04-03T06:32:34.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | thomasdehaene | null | thomasdehaene/xlm-roberta-base-nl-emoji-ner | 6 | null | transformers | 15,540 | Entry not found |
moshew/bert-tiny-emotion-distilled | 9907e5ec8405a1d5b1bc041e844d9e7c126ed413 | 2022-04-03T19:08:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | moshew | null | moshew/bert-tiny-emotion-distilled | 6 | null | transformers | 15,541 | Entry not found |
AnnaBabaie/ms-marco-MiniLM-L-12-v2-news | 3bae90b947e02d57044f44ff5bd6d7bc3e4d63dd | 2022-04-03T13:46:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnnaBabaie | null | AnnaBabaie/ms-marco-MiniLM-L-12-v2-news | 6 | null | transformers | 15,542 | This model is fined tuned for the Fake news classifier: Train a text classification model to detect fake news articles. Base on the Kaggle dataset(https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset).
|
Yaxin/roberta-large-ernie2-skep-en | 99e6bb5a0c565f6ff4f3428ea3d89d87437ad9af | 2022-04-04T07:18:20.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Yaxin | null | Yaxin/roberta-large-ernie2-skep-en | 6 | null | transformers | 15,543 | ---
language: en
---
# SKEP-Roberta
## Introduction
SKEP (Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis) was proposed by Baidu in 2020.
It introduces sentiment knowledge enhanced pre-training for sentiment analysis: sentiment masking and three sentiment pre-training objectives are designed to incorporate various types of sentiment knowledge into the pre-trained model.
More detail: https://aclanthology.org/2020.acl-main.374.pdf
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|skep-roberta-large| English |Layer:24, Hidden:1024, Heads:24|
This released PyTorch model is converted from the officially released PaddlePaddle SKEP model, and
a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle SKEP repo:
1. https://github.com/PaddlePaddle/PaddleNLP/blob/develop/paddlenlp/transformers/skep
2. https://github.com/baidu/Senta
- Pytorch Conversion repo: Not released yet
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Yaxin/roberta-large-ernie2-skep-en")
model = AutoModel.from_pretrained("Yaxin/roberta-large-ernie2-skep-en")
```
```
#!/usr/bin/env python
#encoding: utf-8
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM
tokenizer = RobertaTokenizer.from_pretrained('Yaxin/roberta-large-ernie2-skep-en')
input_tx = "<s> He like play with student, so he became a <mask> after graduation </s>"
# input_tx = "<s> He is a <mask> and likes to get along with his students </s>"
tokenized_text = tokenizer.tokenize(input_tx)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([[0] * len(tokenized_text)])
model = RobertaForMaskedLM.from_pretrained('Yaxin/roberta-large-ernie2-skep-en')
model.eval()
with torch.no_grad():
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
predictions = outputs[0]
predicted_index = [torch.argmax(predictions[0, i]).item() for i in range(0, (len(tokenized_text) - 1))]
predicted_token = [tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in
range(1, (len(tokenized_text) - 1))]
print('Predicted token is:', predicted_token)
```
## Citation
```bibtex
@article{tian2020skep,
title={SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis},
author={Tian, Hao and Gao, Can and Xiao, Xinyan and Liu, Hao and He, Bolei and Wu, Hua and Wang, Haifeng and Wu, Feng},
journal={arXiv preprint arXiv:2005.05635},
year={2020}
}
```
```
reference:
https://github.com/nghuyong/ERNIE-Pytorch
``` |
LeBenchmark/wav2vec-FR-1K-Female-base | 7b4c466f1bbf78ec22e89ef90f24dfc372733901 | 2022-05-11T09:22:54.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2204.01397",
"transformers",
"license:apache-2.0"
]
| null | false | LeBenchmark | null | LeBenchmark/wav2vec-FR-1K-Female-base | 6 | null | transformers | 15,544 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 1K hours of French *female-only* speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech.
For more information about our gender study for SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems](https://arxiv.org/abs/2204.01397)
## Model and data descriptions
We release four gender-specific models trained on 1K hours of speech.
- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
- [wav2vec2-FR-1k-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Referencing our gender-specific models
```
@article{boito2022study,
title={A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems},
author={Marcely Zanon Boito and Laurent Besacier and Natalia Tomashenko and Yannick Est{\`e}ve},
journal={arXiv preprint arXiv:2204.01397},
year={2022}
}
```
## Referencing LeBenchmark
```
@inproceedings{evain2021task,
title={Task agnostic and task specific self-supervised learning from speech with \textit{LeBenchmark}},
author={Evain, Sol{\`e}ne and Nguyen, Ha and Le, Hang and Boito, Marcely Zanon and Mdhaffar, Salima and Alisamir, Sina and Tong, Ziyi and Tomashenko, Natalia and Dinarelli, Marco and Parcollet, Titouan and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
``` |
yj2773/distilbert-base-uncased-fakenews-classif-task | 3937201319d34696ad961ebb2367bd94c5388fb4 | 2022-04-17T20:29:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:afl-3.0"
]
| text-classification | false | yj2773 | null | yj2773/distilbert-base-uncased-fakenews-classif-task | 6 | null | transformers | 15,545 | ---
license: afl-3.0
---
#### DATASET: [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
#### Matthews correlation: 0.998 |
BigSalmon/GPTNeo350MInformalToFormalLincoln7 | 2996c02b660fe8c91a7b18caca63415ae93b3bbe | 2022-04-04T23:01:23.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/GPTNeo350MInformalToFormalLincoln7 | 6 | null | transformers | 15,546 | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln7")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln7")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
``` |
mdroth/bert-finetuned-ner-accelerate | 826f9b062c2c3f77daf16e49f2fd0ce3ab18eb3f | 2022-05-26T18:40:17.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | mdroth | null | mdroth/bert-finetuned-ner-accelerate | 6 | null | transformers | 15,547 | Entry not found |
thangcv/distilbert-base-uncased-finetuned-emotion | 566ad5679ebd047b556af6a326434143fd036ec1 | 2022-04-07T02:01:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | thangcv | null | thangcv/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,548 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9242608108878096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.924
- F1: 0.9243
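A minimal inference sketch (assuming the label set of the `emotion` dataset: sadness, joy, love, anger, fear, surprise):
```python
from transformers import pipeline

# Classify the emotion expressed in a short text.
classifier = pipeline("text-classification", model="thangcv/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how well this turned out!"))
```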
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8151 | 1.0 | 250 | 0.3062 | 0.9115 | 0.9089 |
| 0.2428 | 2.0 | 500 | 0.2156 | 0.924 | 0.9243 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Linguist/t5-small-Linguists_summariser | a446263873a5cb0718370369c3f7e51918b51df5 | 2022-04-06T16:51:53.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Linguist | null | Linguist/t5-small-Linguists_summariser | 6 | null | transformers | 15,549 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-Linguists_summariser
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-Linguists_summariser
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
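In the absence of an official example, a minimal summarization sketch might look like this (the article text and generation settings are illustrative assumptions):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Linguist/t5-small-Linguists_summariser")
article = ("The local council met on Tuesday to discuss the proposed cycle lanes, "
           "with residents raising concerns about parking and traffic flow.")
# Generate a short abstractive summary.
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```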
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pitspits/distilbert-base-uncased-finetuned-emotion | 4c10c1eb079b309225e8245fbcb5d4d775088109 | 2022-04-06T12:59:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pitspits | null | pitspits/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,550 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9250750482655898
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8341 | 1.0 | 250 | 0.3329 | 0.8985 | 0.8950 |
| 0.2562 | 2.0 | 500 | 0.2236 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Graphcore/hubert-base-common-language | 3c82dffabc20ef982a5e87703b8ae039f017ef12 | 2022-04-06T14:55:32.000Z | [
"pytorch",
"hubert",
"text-classification",
"dataset:common_language",
"transformers",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Graphcore | null | Graphcore/hubert-base-common-language | 6 | null | transformers | 15,551 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: hubert-base-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-common-language
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3477
- Accuracy: 0.7317
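A hedged inference sketch, assuming the exported checkpoint also loads with the standard CPU/GPU pipeline outside the IPU training setup (the audio file path is a placeholder):
```python
from transformers import pipeline

# Predict the spoken language of an utterance.
classifier = pipeline("audio-classification", model="Graphcore/hubert-base-common-language")
print(classifier("sample_utterance.wav"))
```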
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 4
- seed: 0
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 10.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
schorndorfer/distilbert-base-uncased-finetuned-emotion | 1b71aa9894454e24cddc19621e6a644903c77ddd | 2022-04-07T14:45:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | schorndorfer | null | schorndorfer/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,552 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9245161685913434
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2177
- Accuracy: 0.924
- F1: 0.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8318 | 1.0 | 250 | 0.3067 | 0.9115 | 0.9091 |
| 0.2412 | 2.0 | 500 | 0.2177 | 0.924 | 0.9245 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mrm8488/t5-small-finetuned-wikisql-sql-nl-nl-sql | c4b3b58284d72596b57f5d9b882cf1ab930f5369 | 2022-04-07T17:41:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-wikisql-sql-nl-nl-sql | 6 | 1 | transformers | 15,553 | ---
license: apache-2.0
tags:
- generated_from_trainer
widget:
- text: "translate to SQL: How many models with BERT architecture are in the HuggingFace Hub?"
- text: "translate to English: SELECT COUNT Model FROM table WHERE Architecture = RoBERTa AND creator = Manuel Romero"
metrics:
- bleu
model-index:
- name: t5-small-finetuned-wikisql-sql-nl-nl-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql-sql-nl-nl-sql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1932
- Bleu: 41.8787
- Gen Len: 16.6251
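Following the widget prompts above, a minimal generation sketch (the generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-wikisql-sql-nl-nl-sql")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-small-finetuned-wikisql-sql-nl-nl-sql")

# Use the same task prefixes as the widget examples ("translate to SQL: ..." / "translate to English: ...").
prompt = "translate to SQL: How many models with BERT architecture are in the HuggingFace Hub?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```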
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.2655 | 1.0 | 8097 | 0.2252 | 39.7999 | 16.6893 |
| 0.2401 | 2.0 | 16194 | 0.2066 | 40.9456 | 16.6712 |
| 0.2236 | 3.0 | 24291 | 0.1985 | 41.3509 | 16.5884 |
| 0.2158 | 4.0 | 32388 | 0.1944 | 41.6988 | 16.6165 |
| 0.2122 | 5.0 | 40485 | 0.1932 | 41.8787 | 16.6251 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mdroth/bert_de_ner-finetuned-ner | ae9140ebebfeba0ddff93669ab293f0f087bcd76 | 2022-04-08T02:00:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | mdroth | null | mdroth/bert_de_ner-finetuned-ner | 6 | null | transformers | 15,554 | Entry not found |
dapang/distilbert-base-uncased-finetuned-mic | de51736d4f898fb497cccec01bfdc14aebeadac1 | 2022-04-08T03:56:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dapang | null | dapang/distilbert-base-uncased-finetuned-mic | 6 | null | transformers | 15,555 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mic
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5640
- Accuracy: 0.7809
- F1: 0.8769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.740146306575944e-05
- train_batch_size: 400
- eval_batch_size: 400
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 18 | 0.7080 | 0.7232 | 0.8394 |
| No log | 2.0 | 36 | 0.4768 | 0.8443 | 0.9156 |
| No log | 3.0 | 54 | 0.5714 | 0.7866 | 0.8806 |
| No log | 4.0 | 72 | 0.7035 | 0.7151 | 0.8339 |
| No log | 5.0 | 90 | 0.5640 | 0.7809 | 0.8769 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.0
|
ukr-models/uk-morph | 4a3e45913221ea8653a4b5ad8335200470d79320 | 2022-04-08T12:32:54.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"uk",
"transformers",
"ukrainian",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | ukr-models | null | ukr-models/uk-morph | 6 | null | transformers | 15,556 | ---
language:
- uk
tags:
- ukrainian
widget:
- text: "Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."
license: mit
---
## Model Description
Fine-tuning of the [XLM-RoBERTa-Uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) model on a [synthetic morphological dataset](https://huggingface.co/datasets/ukr-models/Ukr-Synth); it returns both UPOS tags and morphological features (joined by a double underscore)
## How to Use
Huggingface pipeline way (returns tokens with labels):
```py
from transformers import TokenClassificationPipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-morph')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-morph')
ppln = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
ppln("Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера.")
```
If you wish to get predictions split by words rather than by tokens, you may use the following approach (download the script get_predictions.py from the repository; it uses the [tokenize_uk package](https://pypi.org/project/tokenize_uk/) for splitting):
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from get_predictions import get_word_predictions
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-morph')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-morph')
get_word_predictions(model, tokenizer, ["Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."])
```
|
philschmid/roberta-large-sst2 | 7d2599d698b7a805b6831e15e830e60a0b07bdb4 | 2022-04-08T08:03:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | philschmid | null | philschmid/roberta-large-sst2 | 6 | null | transformers | 15,557 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-large-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9644495412844036
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1400
- Accuracy: 0.9644
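A minimal inference sketch (SST-2 is binary sentiment classification, so the labels are assumed to map to negative/positive):
```python
from transformers import pipeline

# Score the sentiment of a movie-review style sentence.
classifier = pipeline("text-classification", model="philschmid/roberta-large-sst2")
print(classifier("A gripping, beautifully shot film."))
```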
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3688 | 1.0 | 264 | 0.1444 | 0.9564 |
| 0.1529 | 2.0 | 528 | 0.1502 | 0.9518 |
| 0.107 | 3.0 | 792 | 0.1388 | 0.9530 |
| 0.0666 | 4.0 | 1056 | 0.1400 | 0.9644 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
jicoc22578/autotrain-livedoor_news-722922024 | 5b5228a97230652a0b99b68d3ec21619c47e77bc | 2022-04-09T10:47:55.000Z | [
"pytorch",
"bert",
"text-classification",
"ja",
"dataset:jicoc22578/autotrain-data-livedoor_news",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | jicoc22578 | null | jicoc22578/autotrain-livedoor_news-722922024 | 6 | null | transformers | 15,558 | ---
tags: autotrain
language: ja
widget:
- text: "Windows 11搭載PCを買ったら最低限やっておきたいこと"
- text: "3月デスクトップOSシェア、Windowsが増加しMacが減少"
- text: "raytrek、Core i7-12700HとRTX 3070 Tiを搭載するノートPC"
datasets:
- jicoc22578/autotrain-data-livedoor_news
co2_eq_emissions: 0.019299491458156143
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 722922024
- CO2 Emissions (in grams): 0.019299491458156143
## Validation Metrics
- Loss: 0.19609540700912476
- Accuracy: 0.9457627118644067
- Macro F1: 0.9404319054946133
- Micro F1: 0.9457627118644067
- Weighted F1: 0.9456037443251943
- Macro Precision: 0.9420917371721244
- Micro Precision: 0.9457627118644067
- Weighted Precision: 0.9457910238180336
- Macro Recall: 0.9391783746329772
- Micro Recall: 0.9457627118644067
- Weighted Recall: 0.9457627118644067
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jicoc22578/autotrain-livedoor_news-722922024
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jicoc22578/autotrain-livedoor_news-722922024", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jicoc22578/autotrain-livedoor_news-722922024", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
malcolm/TSC_SentimentA_IMDBAmznTSC_2 | c8d819493594360c5a344e8bda67fdc9ea783cb7 | 2022-04-10T09:43:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | malcolm | null | malcolm/TSC_SentimentA_IMDBAmznTSC_2 | 6 | null | transformers | 15,559 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TSC_SentimentA_IMDBAmznTSC_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_SentimentA_IMDBAmznTSC_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1985
- Accuracy: 0.9365
- F1: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nkn002/longformer_fakenews_cls | 85873ce19bfafe4b568c5ad99b622a4d504601fe | 2022-04-11T01:37:39.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
]
| text-classification | false | nkn002 | null | nkn002/longformer_fakenews_cls | 6 | null | transformers | 15,560 | Entry not found |
JminJ/tunibElectra_base_Bad_Sentence_Classifier | 3ad90670fb3be0a9da9021087cede576ae933f4c | 2022-04-11T01:50:02.000Z | [
"pytorch",
"electra",
"text-classification",
"arxiv:2003.10555",
"transformers"
]
| text-classification | false | JminJ | null | JminJ/tunibElectra_base_Bad_Sentence_Classifier | 6 | null | transformers | 15,561 | # Bad_text_classifier
## Model Introduction
We release a model that determines whether comments and chat messages spread across the internet contain sensitive content or not. The model was fine-tuned on a dataset built by revising the labels of public datasets and merging them. Please understand that the model is not always able to judge every sentence correctly.
```
NOTE)
Due to copyright issues with the public datasets, the modified data used for training cannot be released.
Also, the model's outputs are unrelated to my own opinions.
```
## Dataset
### data label
* **0 : bad sentence**
* **1 : not bad sentence**
### Datasets used
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
### Dataset processing
The two datasets, which were not originally binary classification datasets, were re-labeled into a binary scheme, and then only the label 1 (not bad sentence) samples of the Korean HateSpeech Dataset were selected and merged into the processed Korean Unsmile Dataset.
</br>
**Some samples that had been labeled as clean in the Korean Unsmile Dataset were relabeled to 0 (bad sentence).**
* Among sentences containing "~노", samples also containing "이기" or "노무" were relabeled to 0 (bad sentence)
* Samples containing sexual nuances such as "좆" or "봊" were relabeled to 0 (bad sentence)
</br>
## Model Training
* Fine-tuning was performed with ElectraForSequenceClassification from huggingface transformers.
* Three publicly released Korean Electra models were each fine-tuned separately.
### Models used
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
## How to use the model
```PYTHON
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained('JminJ/tunibElectra_base_Bad_Sentence_Classifier')
tokenizer = AutoTokenizer.from_pretrained('JminJ/tunibElectra_base_Bad_Sentence_Classifier')
```
## Model Valid Accuracy
| model | accuracy |
| ---------- | ---------- |
| kcElectra_base_fp16_wd_custom_dataset | 0.8849 |
| tunibElectra_base_fp16_wd_custom_dataset | 0.8726 |
| koElectra_base_fp16_wd_custom_dataset | 0.8434 |
```
Note)
All models were trained with the same seed, learning_rate (3e-06), weight_decay lambda (0.001), and batch_size (128).
```
## Contact
* [email protected]
</br></br>
## Github
* https://github.com/JminJ/Bad_text_classifier
</br></br>
## Reference
* [Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)
* [monologg/koELECTRA](https://github.com/monologg/KoELECTRA)
* [tunib/electra-ko-base](https://huggingface.co/tunib/electra-ko-base)
* [smilegate-ai/Korean Unsmile Dataset](https://github.com/smilegate-ai/korean_unsmile_dataset)
* [kocohub/Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
* [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://arxiv.org/abs/2003.10555)
|
SiriusRen/my-awesome-model2 | 30f10dbf6b2c72ea54343b11554a531894d5ba2a | 2022-04-11T08:45:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SiriusRen | null | SiriusRen/my-awesome-model2 | 6 | null | transformers | 15,562 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my-awesome-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-model2
This model is a fine-tuned version of [SiriusRen/my-awesome-model](https://huggingface.co/SiriusRen/my-awesome-model) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 2.0.1.dev0
- Tokenizers 0.11.6
|
CapoCapped/T5Base | 7692c95d2b367aa27ef1fce274ae273e68fab37d | 2022-04-12T12:53:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"summarization",
"autotrain_compatible"
]
| summarization | false | CapoCapped | null | CapoCapped/T5Base | 6 | null | transformers | 15,563 | ---
tags:
- summarization
--- |
luckydog/distilbert-base-uncased-finetuned-emotion | 79cebd0fbcdf1e6ab24cd8dd66092872c9b62891 | 2022-04-12T12:36:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | luckydog | null | luckydog/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,564 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.8980758869010411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3298
- Accuracy: 0.9
- F1: 0.8981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2761 | 1.0 | 250 | 0.6036 | 0.814 | 0.7881 |
| 0.4081 | 2.0 | 500 | 0.3298 | 0.9 | 0.8981 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nntadotzip/bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022 | 474a659284129966216fca73d777199a26034dad | 2022-04-12T08:14:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | nntadotzip | null | nntadotzip/bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022 | 6 | null | transformers | 15,565 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
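A minimal extractive question-answering sketch (the question and context are invented placeholders):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="nntadotzip/bert-base-cased-IUChatbot-ontologyDts-bertBaseCased-bertTokenizer-12April2022",
)
# Extract an answer span from a supporting context.
print(qa(question="When was the ontology chatbot model fine-tuned?",
         context="The IU chatbot ontology model was fine-tuned on 12 April 2022."))
```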
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 357 | 0.4760 |
| 0.6305 | 2.0 | 714 | 0.3957 |
| 0.4345 | 3.0 | 1071 | 0.3856 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jjzha/dajobbert-base-cased | 30445f3095796c604dfd9a66f2a7bf89e0e620b9 | 2022-07-26T08:15:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"da",
"transformers",
"job postings",
"DaJobBERT",
"autotrain_compatible"
]
| fill-mask | false | jjzha | null | jjzha/dajobbert-base-cased | 6 | 1 | transformers | 15,566 | ---
language:
- da
tags:
- job postings
- DaJobBERT
---
# JobBERT
This is the DaJobBERT model from:
Mike Zhang, Kristian Nørgaard Jensen, and Barbara Plank. __Kompetencer: Fine-grained Skill Classification in Danish Job Postings via Distant Supervision and Transfer Learning__. Proceedings of the Language Resources and Evaluation Conference (LREC). 2022.
This model is continuously pre-trained from the `dabert-base-cased` checkpoint (https://huggingface.co/Maltehb/danish-bert-botxo) on ~24.5M Danish sentences from job postings. More information can be found in the paper.
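A minimal fill-mask sketch (the Danish example sentence is invented for illustration):
```python
from transformers import pipeline

# Probe the masked-language model with a job-posting style sentence.
fill_mask = pipeline("fill-mask", model="jjzha/dajobbert-base-cased")
print(fill_mask("Vi søger en [MASK] med erfaring i Python."))
```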
If you use this model, please cite the following paper:
```
@InProceedings{zhang-jensen-plank:2022:LREC,
author = {Zhang, Mike and Jensen, Kristian N{\o}rgaard and Plank, Barbara},
title = {Kompetencer: Fine-grained Skill Classification in Danish Job Postings via Distant Supervision and Transfer Learning},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {436--447},
abstract = {Skill Classification (SC) is the task of classifying job competences from job postings. This work is the first in SC applied to Danish job vacancy data. We release the first Danish job posting dataset: *Kompetencer* (\_en\_: competences), annotated for nested spans of competences. To improve upon coarse-grained annotations, we make use of The European Skills, Competences, Qualifications and Occupations (ESCO; le Vrang et al., (2014)) taxonomy API to obtain fine-grained labels via distant supervision. We study two setups: The zero-shot and few-shot classification setting. We fine-tune English-based models and RemBERT (Chung et al., 2020) and compare them to in-language Danish models. Our results show RemBERT significantly outperforms all other models in both the zero-shot and the few-shot setting.},
url = {https://aclanthology.org/2022.lrec-1.46}
}
``` |
AndrewR/distilbert-base-uncased-finetuned-imdb | 0047231f5500771085867e2e1145777ca4ef0bc6 | 2022-04-12T16:02:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | AndrewR | null | AndrewR/distilbert-base-uncased-finetuned-imdb | 6 | null | transformers | 15,567 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5273 | 1.0 | 157 | 2.4557 |
| 2.4839 | 2.0 | 314 | 2.4263 |
| 2.4696 | 3.0 | 471 | 2.3919 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Chikashi/t5-small-finetuned-cnndm-wikihow | 9bd7768b09dc8120fac1bd80c10d82a9bdc5790f | 2022-04-13T01:51:44.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm-wikihow | 6 | null | transformers | 15,568 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm-wikihow
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 27.5037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm-wikihow
This model is a fine-tuned version of [Sevil/t5-small-finetuned-cnndm_3epoch_v2](https://huggingface.co/Sevil/t5-small-finetuned-cnndm_3epoch_v2) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2653
- Rouge1: 27.5037
- Rouge2: 10.8442
- Rougel: 23.4674
- Rougelsum: 26.7997
- Gen Len: 18.5558
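As a rough usage sketch (not provided by the author), the checkpoint can be called like any T5-based summarizer; the input text is a placeholder, and depending on the saved config you may need to prepend T5's usual `summarize:` prefix yourself.
```python
from transformers import pipeline

# Rough sketch: abstractive summarization with the fine-tuned t5-small checkpoint.
summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm-wikihow")

article = "Replace this placeholder with the news article or how-to text you want summarized."
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```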
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.8459 | 0.13 | 5000 | 2.5755 | 25.2929 | 8.7852 | 21.2379 | 24.5649 | 18.4758 |
| 2.7251 | 0.25 | 10000 | 2.5189 | 25.33 | 9.0505 | 21.4892 | 24.6523 | 18.4513 |
| 2.6696 | 0.38 | 15000 | 2.4805 | 26.3909 | 9.6858 | 22.3589 | 25.7297 | 18.4649 |
| 2.647 | 0.51 | 20000 | 2.4491 | 25.9234 | 9.3936 | 22.0086 | 25.2342 | 18.5558 |
| 2.5973 | 0.64 | 25000 | 2.4251 | 26.4988 | 9.8197 | 22.6201 | 25.8407 | 18.3438 |
| 2.5916 | 0.76 | 30000 | 2.4022 | 26.3149 | 9.8432 | 22.3695 | 25.6581 | 18.4506 |
| 2.5691 | 0.89 | 35000 | 2.3801 | 26.4198 | 9.8848 | 22.4856 | 25.7847 | 18.5381 |
| 2.5365 | 1.02 | 40000 | 2.3755 | 26.5846 | 10.0287 | 22.667 | 25.9606 | 18.5608 |
| 2.4649 | 1.14 | 45000 | 2.3663 | 26.5925 | 10.0569 | 22.6191 | 25.9247 | 18.5803 |
| 2.4539 | 1.27 | 50000 | 2.3490 | 26.9735 | 10.2389 | 22.9536 | 26.282 | 18.5126 |
| 2.4578 | 1.4 | 55000 | 2.3374 | 26.7878 | 10.2275 | 22.849 | 26.1188 | 18.6162 |
| 2.4365 | 1.53 | 60000 | 2.3266 | 27.1171 | 10.403 | 23.0596 | 26.4284 | 18.6128 |
| 2.428 | 1.65 | 65000 | 2.3209 | 27.1762 | 10.578 | 23.1577 | 26.5007 | 18.5246 |
| 2.4293 | 1.78 | 70000 | 2.3145 | 27.0896 | 10.5146 | 23.1502 | 26.4338 | 18.4604 |
| 2.4335 | 1.91 | 75000 | 2.2979 | 27.3373 | 10.6273 | 23.2944 | 26.6725 | 18.5403 |
| 2.3981 | 2.03 | 80000 | 2.3008 | 27.1857 | 10.6455 | 23.1333 | 26.5203 | 18.5412 |
| 2.3395 | 2.16 | 85000 | 2.2908 | 27.3123 | 10.7063 | 23.3126 | 26.626 | 18.4265 |
| 2.3463 | 2.29 | 90000 | 2.2869 | 27.5328 | 10.7662 | 23.4527 | 26.8613 | 18.5664 |
| 2.3481 | 2.42 | 95000 | 2.2802 | 27.4799 | 10.7826 | 23.4538 | 26.7912 | 18.5449 |
| 2.3345 | 2.54 | 100000 | 2.2774 | 27.3182 | 10.724 | 23.3276 | 26.669 | 18.5908 |
| 2.3254 | 2.67 | 105000 | 2.2713 | 27.3942 | 10.777 | 23.3918 | 26.7036 | 18.5681 |
| 2.3369 | 2.8 | 110000 | 2.2666 | 27.5976 | 10.9144 | 23.5832 | 26.9147 | 18.5471 |
| 2.3269 | 2.93 | 115000 | 2.2653 | 27.5037 | 10.8442 | 23.4674 | 26.7997 | 18.5558 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mekondjo/distilbert-base-uncased-finetuned-emotion | 62c49ea83c339ad5dae7b9d0d353f296cb8e8c70 | 2022-04-12T15:53:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | mekondjo | null | mekondjo/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,569 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9248167911304236
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.848 | 1.0 | 250 | 0.3157 | 0.9075 | 0.9059 |
| 0.253 | 2.0 | 500 | 0.2219 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
smeoni/nbme-clinical-longformer | 1b808ca5681169432b657c388ff36913ab8c2d28 | 2022-04-12T18:49:58.000Z | [
"pytorch",
"longformer",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | smeoni | null | smeoni/nbme-clinical-longformer | 6 | null | transformers | 15,570 | Entry not found |
lewtun/roberta-large-finetuned-clinc | 1e90b7dca6a89854a502f8915ca2215b252ff772 | 2022-04-13T08:48:32.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc | 6 | null | transformers | 15,571 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9767741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1545
- Accuracy: 0.9768
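As an illustrative sketch (not part of the generated card), the checkpoint can be used for intent detection over the CLINC150 label set with the text-classification pipeline; the utterance below is a made-up example.
```python
from transformers import pipeline

# Illustrative sketch: intent classification with the fine-tuned checkpoint.
intent_classifier = pipeline("text-classification", model="lewtun/roberta-large-finetuned-clinc")

query = "Please transfer 100 dollars from my checking account to savings."  # made-up utterance
print(intent_classifier(query))  # best-scoring intent label with its score
```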
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0548 | 1.0 | 120 | 5.0359 | 0.0071 |
| 4.4725 | 2.0 | 240 | 2.9385 | 0.7558 |
| 1.8924 | 3.0 | 360 | 0.6456 | 0.9374 |
| 0.4552 | 4.0 | 480 | 0.2297 | 0.9626 |
| 0.1589 | 5.0 | 600 | 0.1545 | 0.9768 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
simonnedved/codet5-large-v1 | cb3e372ddc3a70b4acaa74b65868409e9f05e908 | 2022-04-13T15:44:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | simonnedved | null | simonnedved/codet5-large-v1 | 6 | null | transformers | 15,572 | ---
license: apache-2.0
---
|
CenIA/distillbert-base-spanish-uncased-finetuned-qa-sqac | 3610e05a0c30873b2cbe76f8928db05d348ce516 | 2022-04-14T21:57:30.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-qa-sqac | 6 | 3 | transformers | 15,573 | Entry not found |
ddobokki/unsup-simcse-klue-roberta-small | 26f03fa19cb1166e0df7f01384fb872fa02b2e22 | 2022-04-16T04:26:10.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"ko"
]
| sentence-similarity | false | ddobokki | null | ddobokki/unsup-simcse-klue-roberta-small | 6 | null | sentence-transformers | 15,574 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- ko
---
# ddobokki/unsup-simcse-klue-roberta-small
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ddobokki/unsup-simcse-klue-roberta-small')
embeddings = model.encode(sentences)
print(embeddings)
```
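A natural follow-up, added here as a sketch rather than the author's own example, is to score sentence pairs by cosine similarity; this assumes a recent sentence-transformers release that provides `util.cos_sim`, and the Korean sentences are made-up examples.
```python
from sentence_transformers import SentenceTransformer, util

# Sketch: cosine similarity between two Korean sentences (made-up examples).
model = SentenceTransformer('ddobokki/unsup-simcse-klue-roberta-small')
embeddings = model.encode(["오늘 날씨가 정말 좋다", "오늘은 날씨가 맑고 화창하다"], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```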
(In development)
GitHub: https://github.com/ddobokki/KoSimCSE
|
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-ar | 5565bbe9a4a93ea306254a065d5bfeb22436a12d | 2022-04-16T04:42:02.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | ahmeddbahaa | null | ahmeddbahaa/mT5_multilingual_XLSum-finetuned-ar | 6 | null | transformers | 15,575 | ---
tags:
- generated_from_trainer
model-index:
- name: mT5_multilingual_XLSum-finetuned-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-ar
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
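For readers reproducing this setup, the hyperparameters listed above correspond roughly to the following training arguments; this is a sketch rather than the original script, the `output_dir` is assumed, and whether the original run used `Seq2SeqTrainingArguments` or plain `TrainingArguments` is not stated in the card.
```python
from transformers import Seq2SeqTrainingArguments

# Rough mapping of the listed hyperparameters (batch sizes are per device).
training_args = Seq2SeqTrainingArguments(
    output_dir="mT5_multilingual_XLSum-finetuned-ar",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=10,
    label_smoothing_factor=0.1,
)
```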
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
GioReg/bertdbmdzIhate | a8815e9edf34ed056217316990f1d4deed3f168f | 2022-04-15T12:03:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | GioReg | null | GioReg/bertdbmdzIhate | 6 | null | transformers | 15,576 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertdbmdzIhate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertdbmdzIhate
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6880
- Accuracy: 0.726
- F1: 0.4170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
birgermoell/psst-fairseq-rir | 66fd69e0e90a071593291e5746de9d7a29f38878 | 2022-04-15T13:57:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"transformers",
"license:apache-2.0"
]
| automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-fairseq-rir | 6 | null | transformers | 15,577 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
---
This model is trained on the PSST Challenge data, with a subset of TIMIT that was augmented using Room Impulse Response (RIR). A file containing the list of TIMIT IDs is in the repository (`timit-ids.txt`).
The model was fine-tuned from [Wav2vec 2.0 Base, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec), and the results on the validation set were **PER:** 21.8%, **FER:** 9.6%.
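For reference, here is a minimal, hypothetical transcription sketch using the Transformers CTC API; it assumes the repository ships a compatible `Wav2Vec2Processor`, `speech.wav` is a placeholder for a 16 kHz mono recording, and because the model targets phonemes (PER/FER above) the decoded output is a phoneme sequence rather than orthographic text.
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical usage sketch; only the model id comes from this card.
processor = Wav2Vec2Processor.from_pretrained("birgermoell/psst-fairseq-rir")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/psst-fairseq-rir")

speech, sample_rate = sf.read("speech.wav")  # placeholder 16 kHz mono file
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))  # phoneme-level transcription
```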
|
MartinoMensio/racism-models-regression-w-m-vote-epoch-1 | 5cc67f99cabe7c5b4b1ad0753652464c5fe81401 | 2022-05-04T16:18:39.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-regression-w-m-vote-epoch-1 | 6 | null | transformers | 15,578 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `regression-w-m-vote-epoch-1`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from transformers.pipelines import TextClassificationPipeline
class TextRegressionPipeline(TextClassificationPipeline):
    """
    Class based on the TextClassificationPipeline from transformers.
    The difference is that instead of being based on a classifier, it is based on a regressor.
    You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline.
    """
    def __init__(self, **kwargs):
        """
        Builds a new Pipeline based on regression.
        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold = kwargs.pop("regression_threshold", None)
        super().__init__(**kwargs)
    def __call__(self, *args, **kwargs):
        """
        You can also specify the regression threshold when you call the pipeline.
        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold_call = kwargs.pop("regression_threshold", None)
        result = super().__call__(*args, **kwargs)
        return result
    def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False):
        outputs = model_outputs["logits"][0]
        outputs = outputs.numpy()
        scores = outputs
        score = scores[0]
        regression_threshold = self.regression_threshold
        # override the specific threshold if it is specified in the call
        if self.regression_threshold_call:
            regression_threshold = self.regression_threshold_call
        if regression_threshold:
            return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score}
        else:
            return {"score": score}
model_name = 'regression-w-m-vote-epoch-1'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
# just get the score of regression
print(pipe(texts))
# [{'score': 0.8378907}, {'score': 0.33399782}]
# or also specify a threshold to cut racist/non-racist
print(pipe(texts, regression_threshold=0.9))
# [{'label': 'non-racist', 'score': 0.8378907}, {'label': 'non-racist', 'score': 0.33399782}]
```
For more details, see https://github.com/preyero/neatclass22
|
MartinoMensio/racism-models-regression-w-m-vote-epoch-2 | eb49a75ddb45c166af578a6b9f468fbab8bb5bd7 | 2022-05-04T16:20:44.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"transformers",
"license:mit"
]
| text-classification | false | MartinoMensio | null | MartinoMensio/racism-models-regression-w-m-vote-epoch-2 | 6 | null | transformers | 15,579 | ---
language: es
license: mit
widget:
- text: "y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!"
---
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `regression-w-m-vote-epoch-2`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from transformers.pipelines import TextClassificationPipeline
class TextRegressionPipeline(TextClassificationPipeline):
    """
    Class based on the TextClassificationPipeline from transformers.
    The difference is that instead of being based on a classifier, it is based on a regressor.
    You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline.
    """
    def __init__(self, **kwargs):
        """
        Builds a new Pipeline based on regression.
        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold = kwargs.pop("regression_threshold", None)
        super().__init__(**kwargs)
    def __call__(self, *args, **kwargs):
        """
        You can also specify the regression threshold when you call the pipeline.
        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold_call = kwargs.pop("regression_threshold", None)
        result = super().__call__(*args, **kwargs)
        return result
    def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False):
        outputs = model_outputs["logits"][0]
        outputs = outputs.numpy()
        scores = outputs
        score = scores[0]
        regression_threshold = self.regression_threshold
        # override the specific threshold if it is specified in the call
        if self.regression_threshold_call:
            regression_threshold = self.regression_threshold_call
        if regression_threshold:
            return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score}
        else:
            return {"score": score}
model_name = 'regression-w-m-vote-epoch-2'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
# just get the score of regression
print(pipe(texts))
# [{'score': 0.8367272}, {'score': 0.4402479}]
# or also specify a threshold to cut racist/non-racist
print(pipe(texts, regression_threshold=0.9))
# [{'label': 'non-racist', 'score': 0.8367272}, {'label': 'non-racist', 'score': 0.4402479}]
```
For more details, see https://github.com/preyero/neatclass22
|
profoz/distilbert-toxic-classifier | db64ff81614697fc27ae5f5547bbb36be50c9996 | 2022-04-15T19:07:38.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | profoz | null | profoz/distilbert-toxic-classifier | 6 | null | transformers | 15,580 | ## DistilBERT Toxic Classifier |
jason9693/soongsil-bert-small-apeach | 2272a3ee32ad7a69080b52af82e96ed2c688a5f1 | 2022-04-16T14:19:36.000Z | [
"pytorch",
"roberta",
"text-classification",
"ko",
"dataset:jason9693/APEACH",
"transformers",
"co2_eq_emissions"
]
| text-classification | false | jason9693 | null | jason9693/soongsil-bert-small-apeach | 6 | null | transformers | 15,581 | ---
language: ko
widget:
- text: "응 어쩔티비~~"
datasets:
- jason9693/APEACH
co2_eq_emissions: 0.01856239042036965
--- |
rmihaylov/gpt2-medium-bg | 5db5a5d613dfa2201bafea52861b72ef3840ba4d | 2022-04-16T18:29:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"transformers",
"torch",
"license:mit"
]
| text-generation | false | rmihaylov | null | rmihaylov/gpt2-medium-bg | 6 | null | transformers | 15,582 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# GPT-2
Pretrained model on Bulgarian language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
This is the **MEDIUM** version.
The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
## Intended uses & limitations
You can use the raw model for:
- text generation
- auto-complete
- spelling correction
Or fine-tune it to a downstream task.
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model_id = "rmihaylov/gpt2-medium-bg"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
>>>
>>> input_ids = tokenizer.encode(
>>> "Здравей,",
>>> add_special_tokens=False,
>>> return_tensors='pt')
>>>
>>> output_ids = model.generate(
>>> input_ids,
>>> do_sample=True,
>>> max_length=50,
>>> top_p=0.92,
>>> pad_token_id=2,
>>> top_k=0)
>>>
>>> output = tokenizer.decode(output_ids[0])
>>>
>>> output = output.replace('<|endoftext|>', '\n\n\n')
>>> output = output.replace('<|unknown|>', '')
>>> output = output.replace('▁', ' ')
>>> output = output.replace('<|n|>', '\n')
>>>
>>> print(output)
Здравей, господин Фиш. — Добс забеляза как пребледня Ривера.
— Не си тръгвайте още. Имам да ви задам няколко въпроса.
— Благодаря, благодаря. — Фиш не изчака да му покаже, че е забелязал жеста й
```
### Limitations and bias
As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes. |
ttury/webnovel-kogpt2 | 296b5df77f4fe83e19f3abcb04aa1563591fb6ba | 2022-04-17T14:15:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ttury | null | ttury/webnovel-kogpt2 | 6 | null | transformers | 15,583 | Entry not found |
yliu337/bert_poet_classifier | 098fcb7ad2e2e998b75521a0fbe32738f86c6748 | 2022-04-17T18:39:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | yliu337 | null | yliu337/bert_poet_classifier | 6 | null | transformers | 15,584 | Entry not found |
user1/distilbert-base-uncased-finetuned-emotion | 911625966475421e14e8e57894c6d6c59cd91f60 | 2022-04-19T03:59:12.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | user1 | null | user1/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,585 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215748499839705
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2302
- Accuracy: 0.9215
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8775 | 1.0 | 250 | 0.3501 | 0.894 | 0.8871 |
| 0.2658 | 2.0 | 500 | 0.2302 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133 | 61536e61804905131b4eaaddf8d10428a83ac2d8 | 2022-04-18T18:39:48.000Z | [
"pytorch",
"roberta",
"text-classification",
"unk",
"dataset:zainalq7/autotrain-data-NLU_crypto_sentiment_analysis",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | zainalq7 | null | zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133 | 6 | null | transformers | 15,586 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zainalq7/autotrain-data-NLU_crypto_sentiment_analysis
co2_eq_emissions: 0.005300030853867218
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 754123133
- CO2 Emissions (in grams): 0.005300030853867218
## Validation Metrics
- Loss: 0.387116938829422
- Accuracy: 0.8658536585365854
- Macro F1: 0.7724053724053724
- Micro F1: 0.8658536585365854
- Weighted F1: 0.8467166979362101
- Macro Precision: 0.8232219717155155
- Micro Precision: 0.8658536585365854
- Weighted Precision: 0.8516026874759421
- Macro Recall: 0.7642089093701996
- Micro Recall: 0.8658536585365854
- Weighted Recall: 0.8658536585365854
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("zainalq7/autotrain-NLU_crypto_sentiment_analysis-754123133", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Auruncus/gpt-j-6b-8bit-fine-tuned | fedeff5e7e69c69a6e6a458edc1999a9007077be | 2022-04-18T23:59:09.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
]
| text-generation | false | Auruncus | null | Auruncus/gpt-j-6b-8bit-fine-tuned | 6 | 1 | transformers | 15,587 | Entry not found |
anshr/distilbert_reward_model_01 | 41e654995dbe3797ee1fa66d55e4c22b9287262d | 2022-04-19T00:51:55.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | anshr | null | anshr/distilbert_reward_model_01 | 6 | null | transformers | 15,588 | Entry not found |
tuhailong/cross_encoder_roberta-wwm-ext_v0 | 361e575ab4a2c1a2a90525b30688a95ee22ed258 | 2022-04-20T02:41:37.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:dialogue",
"transformers",
"cross-encoder"
]
| text-classification | false | tuhailong | null | tuhailong/cross_encoder_roberta-wwm-ext_v0 | 6 | null | transformers | 15,589 | ---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---
# Data
The training data consists of similar-sentence pairs from e-commerce dialogue, about 500k (50w) sentence pairs.
## Model
The model was created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a cross-encoder and the pretrained base model is hfl/chinese-roberta-wwm-ext.
### Usage
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder("tuhailong/cross_encoder_roberta-wwm-ext_v0", device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
```
#### Code
Training code: https://github.com/TTurn/cross-encoder |
tuhailong/cross_encoder_roberta-wwm-ext-large | 7e92109a759438804742b12fc0b81ad6b718a590 | 2022-04-20T02:39:46.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:dialogue",
"transformers",
"cross-encoder"
]
| text-classification | false | tuhailong | null | tuhailong/cross_encoder_roberta-wwm-ext-large | 6 | null | transformers | 15,590 | ---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---
# Data
The training data consists of similar-sentence pairs from e-commerce dialogue, about 500k (50w) sentence pairs.
## Model
The model was created with [sentence-transformers](https://www.sbert.net/index.html); the model structure is a cross-encoder and the pretrained base model is hfl/chinese-roberta-wwm-ext-large.
### Code
Training code: https://github.com/TTurn/cross-encoder
#### Usage
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> model = CrossEncoder("tuhailong/cross_encoder_roberta-wwm-ext-large", device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
``` |
GPL/trec-covid-msmarco-distilbert-gpl | 83fd9fcc11f3db3ae43d9c3493f1ccfd6af07646 | 2022-04-19T15:16:52.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GPL | null | GPL/trec-covid-msmarco-distilbert-gpl | 6 | null | sentence-transformers | 15,591 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/trec-covid-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/trec-covid-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/trec-covid-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/trec-covid-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/quora-tsdae-msmarco-distilbert-gpl | 75c1f53027caa1c5c116187331cbdc1661a72790 | 2022-04-19T15:25:55.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GPL | null | GPL/quora-tsdae-msmarco-distilbert-gpl | 6 | null | sentence-transformers | 15,592 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/quora-tsdae-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/quora-tsdae-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/quora-tsdae-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/quora-tsdae-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
gbennett/distilbert-base-uncased-finetuned-emotion | 02ff9067b350b881bc5ab81cc11baf19a2f236d6 | 2022-04-19T20:26:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | gbennett | null | gbennett/distilbert-base-uncased-finetuned-emotion | 6 | null | transformers | 15,593 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
- name: F1
type: f1
value: 0.9188211123089982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2260
- Accuracy: 0.9185
- F1: 0.9188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8134 | 1.0 | 250 | 0.3117 | 0.908 | 0.9056 |
| 0.2477 | 2.0 | 500 | 0.2260 | 0.9185 | 0.9188 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
V0ltron/layoutLMTesting-different-labels | e560738449bc5c20b5c52fe2614c2d423bc1bc8d | 2022-04-20T05:58:18.000Z | [
"pytorch",
"layoutlmv2",
"text-classification",
"transformers"
]
| text-classification | false | V0ltron | null | V0ltron/layoutLMTesting-different-labels | 6 | null | transformers | 15,594 | Entry not found |
luquesky/distilbert-base-uncased-finetuned-emotion-bigger-batch-better-who-knows | bd43b658a94971fc0dc37836c6a054f0789ec714 | 2022-04-20T14:05:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | luquesky | null | luquesky/distilbert-base-uncased-finetuned-emotion-bigger-batch-better-who-knows | 6 | null | transformers | 15,595 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion-bigger-batch-better-who-knows
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-bigger-batch-better-who-knows
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Jeevesh8/feather_berts_73 | 9ef70aae9df1edd8a549f008aa538d239bd206e2 | 2022-04-20T13:44:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/feather_berts_73 | 6 | null | transformers | 15,596 | Entry not found |
Narsil/tiny-random-bart | 47b08938a3b44cb2d62f2af3811a318cc794f11b | 2022-04-20T14:41:29.000Z | [
"pytorch",
"tf",
"bart",
"transformers",
"text2text-generation"
]
| text2text-generation | false | Narsil | null | Narsil/tiny-random-bart | 6 | null | transformers | 15,597 | ---
pipeline_tag: "text2text-generation"
---
|
Raychanan/bert-base-cased-last500-SEP | 293ca176b5b3c43776fd6a30ab843efc1007ddf2 | 2022-04-20T22:50:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Raychanan | null | Raychanan/bert-base-cased-last500-SEP | 6 | null | transformers | 15,598 | Entry not found |
nnn/nezha-cn-base | cead2505ba4551c99ca7c79cf514c2a5aa686388 | 2022-04-21T02:09:33.000Z | [
"pytorch",
"transformers"
]
| null | false | nnn | null | nnn/nezha-cn-base | 6 | null | transformers | 15,599 | Entry not found |