modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list of strings) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
---|---|---|---|---|---|---|---|---|---|
nateraw/my-cool-timm-model-3 | nateraw | 2021-11-15T20:08:55Z | 10 | 0 | timm | [
"timm",
"pytorch",
"tensorboard",
"image-classification",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- timm
- generated_from_trainer
datasets:
- cats_vs_dogs
model-index:
- name: my-cool-timm-model-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-cool-timm-model-3
This model is a fine-tuned version of [resnet18](https://huggingface.co/resnet18) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2455
- Acc1: 94.4175
- Acc5: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
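A minimal inference sketch, assuming a `timm` release that supports loading checkpoints from the Hugging Face Hub via the `hf-hub:` prefix and that this repository is stored in timm's standard Hub layout; the image file `cat.jpg` is a hypothetical input:
```python
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

# Load the fine-tuned checkpoint from the Hub (assumes hf-hub support in timm)
model = timm.create_model("hf-hub:nateraw/my-cool-timm-model-3", pretrained=True)
model.eval()

# Rebuild the preprocessing pipeline from the model's pretrained config
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

image = Image.open("cat.jpg").convert("RGB")  # hypothetical input file
with torch.no_grad():
    logits = model(transform(image).unsqueeze(0))
probs = logits.softmax(dim=-1)  # probabilities over the cat/dog classes
```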
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc1 | Acc5 |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-----:|
| 0.5152 | 0.14 | 10 | 0.2455 | 94.4175 | 100.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
nateraw/my-cool-timm-model-2 | nateraw | 2021-11-15T20:06:24Z | 4 | 0 | timm | [
"timm",
"pytorch",
"tensorboard",
"image-classification",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- timm
- generated_from_trainer
library_tag: timm
datasets:
- cats_vs_dogs
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-cool-timm-model-2
This model is a fine-tuned version of [resnet18](https://huggingface.co/resnet18) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2510
- Acc1: 95.2150
- Acc5: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc1 | Acc5 |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-----:|
| No log | 0.07 | 5 | 0.3436 | 92.0820 | 100.0 |
| 0.4914 | 0.14 | 10 | 0.2510 | 95.2150 | 100.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
vkk1710/xlnet-base-cased-finetuned-qqp | vkk1710 | 2021-11-15T19:25:06Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: xlnet-base-cased-finetuned-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-qqp
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the QQP task (part of the GLUE benchmark).
It achieves the following results on the evaluation set:
- eval_loss: 0.27
- eval_accuracy: 0.9084
- eval_f1: 0.8775
- epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
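A hedged inference sketch for the sentence-pair setup that QQP uses; the example questions are made up, and the label-to-id mapping should be checked against `model.config.id2label`:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "vkk1710/xlnet-base-cased-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP is a sentence-pair task: are the two questions duplicates?
q1 = "How can I learn to cook?"                      # hypothetical example
q2 = "What is the best way to learn cooking?"
inputs = tokenizer(q1, q2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # verify which index means "duplicate" via model.config.id2label
```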
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
huyue012/wav2vec2-base-cynthia-timit | huyue012 | 2021-11-15T17:29:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-timit
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Wer: 0.3315
## Model description
More information needed
## Intended uses & limitations
More information needed
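A rough transcription sketch, assuming the repository ships the matching processor files and that the input audio is 16 kHz mono (as in TIMIT); `sample.wav` is a hypothetical recording:
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "huyue012/wav2vec2-base-cynthia-timit"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a hypothetical 16 kHz mono recording
speech, sample_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```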
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.7674 | 1.0 | 500 | 2.8994 | 1.0 |
| 1.3538 | 2.01 | 1000 | 0.5623 | 0.5630 |
| 0.5416 | 3.01 | 1500 | 0.4595 | 0.4765 |
| 0.3563 | 4.02 | 2000 | 0.4435 | 0.4328 |
| 0.2869 | 5.02 | 2500 | 0.4035 | 0.4145 |
| 0.2536 | 6.02 | 3000 | 0.4090 | 0.3945 |
| 0.2072 | 7.03 | 3500 | 0.4188 | 0.3809 |
| 0.1825 | 8.03 | 4000 | 0.4139 | 0.3865 |
| 0.1754 | 9.04 | 4500 | 0.4320 | 0.3763 |
| 0.1477 | 10.04 | 5000 | 0.4668 | 0.3699 |
| 0.1418 | 11.04 | 5500 | 0.4439 | 0.3683 |
| 0.1207 | 12.05 | 6000 | 0.4419 | 0.3678 |
| 0.115 | 13.05 | 6500 | 0.4606 | 0.3786 |
| 0.1022 | 14.06 | 7000 | 0.4403 | 0.3610 |
| 0.1019 | 15.06 | 7500 | 0.4966 | 0.3609 |
| 0.0898 | 16.06 | 8000 | 0.4675 | 0.3586 |
| 0.0824 | 17.07 | 8500 | 0.4844 | 0.3583 |
| 0.0737 | 18.07 | 9000 | 0.4801 | 0.3534 |
| 0.076 | 19.08 | 9500 | 0.4945 | 0.3529 |
| 0.0627 | 20.08 | 10000 | 0.4700 | 0.3417 |
| 0.0723 | 21.08 | 10500 | 0.4630 | 0.3449 |
| 0.0597 | 22.09 | 11000 | 0.5164 | 0.3456 |
| 0.0566 | 23.09 | 11500 | 0.4957 | 0.3401 |
| 0.0453 | 24.1 | 12000 | 0.5032 | 0.3419 |
| 0.0492 | 25.1 | 12500 | 0.5391 | 0.3387 |
| 0.0524 | 26.1 | 13000 | 0.5057 | 0.3348 |
| 0.0381 | 27.11 | 13500 | 0.5098 | 0.3331 |
| 0.0402 | 28.11 | 14000 | 0.5087 | 0.3353 |
| 0.0358 | 29.12 | 14500 | 0.4888 | 0.3315 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
lucasresck/distilbert-base-uncased-finetuned-squad | lucasresck | 2021-11-15T17:04:05Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
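As a hedged usage sketch, extractive QA can be run through the standard question-answering pipeline; the question and context below are purely illustrative:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lucasresck/distilbert-base-uncased-finetuned-squad",
)

# Illustrative example; any English question/context pair works the same way
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```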
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
lidiia/autonlp-trans_class_arg-32957902 | lidiia | 2021-11-15T16:48:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:lidiia/autonlp-data-trans_class_arg",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- lidiia/autonlp-data-trans_class_arg
co2_eq_emissions: 0.9756221672668951
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 32957902
- CO2 Emissions (in grams): 0.9756221672668951
## Validation Metrics
- Loss: 0.2765039801597595
- Accuracy: 0.8939828080229226
- Precision: 0.7757009345794392
- Recall: 0.8645833333333334
- AUC: 0.9552659749670619
- F1: 0.8177339901477833
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lidiia/autonlp-trans_class_arg-32957902
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Theivaprakasham/wav2vec2-base-timit-demo-colab | Theivaprakasham | 2021-11-15T14:33:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4475
- Wer: 0.3400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6929 | 4.0 | 500 | 2.4485 | 1.0009 |
| 0.9441 | 8.0 | 1000 | 0.4848 | 0.4758 |
| 0.3016 | 12.0 | 1500 | 0.4464 | 0.4016 |
| 0.1715 | 16.0 | 2000 | 0.4666 | 0.3765 |
| 0.1277 | 20.0 | 2500 | 0.4340 | 0.3515 |
| 0.1082 | 24.0 | 3000 | 0.4544 | 0.3495 |
| 0.0819 | 28.0 | 3500 | 0.4475 | 0.3400 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
moussaKam/barthez-sentiment-classification | moussaKam | 2021-11-15T13:02:33Z | 16 | 2 | transformers | [
"transformers",
"pytorch",
"mbart",
"text-classification",
"bart",
"fr",
"arxiv:2010.12321",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- text-classification
- bart
language:
- fr
license: apache-2.0
widget:
- text: Barthez est le meilleur gardien du monde.
---
### BARThez model fine-tuned on an opinion classification task.
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
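A short, hedged usage sketch via the generic text-classification pipeline, assuming the checkpoint exposes a sequence-classification head the pipeline can load; the widget sentence above is reused as input:
```python
from transformers import pipeline

# Sketch only: loads the classifier through the generic text-classification pipeline
classifier = pipeline("text-classification", model="moussaKam/barthez-sentiment-classification")
print(classifier("Barthez est le meilleur gardien du monde."))
```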
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
AdapterHub/roberta-base-pf-wikihop | AdapterHub | 2021-11-15T10:44:47Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"roberta",
"adapterhub:qa/wikihop",
"en",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- roberta
- adapterhub:qa/wikihop
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-wikihop` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-wikihop", source="hf")
model.active_adapters = adapter_name
```
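Continuing from the snippet above, a quick (hedged) sanity check, assuming the adapter's question-answering head exposes the usual start/end span logits; the question and context are illustrative:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
question = "Who wrote Hamlet?"                      # illustrative inputs
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")

outputs = model(**inputs)                           # uses the adapter's QA head
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(tokenizer.decode(inputs.input_ids[0][start:end + 1]))
```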
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-ud_deprel | AdapterHub | 2021-11-15T10:44:17Z | 6 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:deprel/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- roberta
- adapterhub:deprel/ud_ewt
- adapter-transformers
datasets:
- universal_dependencies
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-ud_deprel` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-ud_deprel", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-stsb | AdapterHub | 2021-11-15T10:43:55Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"adapterhub:sts/sts-b",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapterhub:sts/sts-b
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-stsb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-stsb", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-sst2 | AdapterHub | 2021-11-15T10:43:33Z | 7 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"adapterhub:sentiment/sst-2",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapterhub:sentiment/sst-2
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-sst2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-sst2", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-squad_v2 | AdapterHub | 2021-11-15T10:42:57Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"roberta",
"adapterhub:qa/squad2",
"en",
"dataset:squad_v2",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- roberta
- adapterhub:qa/squad2
- adapter-transformers
datasets:
- squad_v2
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-squad_v2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-squad | AdapterHub | 2021-11-15T10:42:49Z | 5 | 1 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"roberta",
"adapterhub:qa/squad1",
"en",
"dataset:squad",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- roberta
- adapterhub:qa/squad1
- adapter-transformers
datasets:
- squad
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-squad` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-squad", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-rte | AdapterHub | 2021-11-15T10:41:59Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"adapterhub:nli/rte",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapterhub:nli/rte
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-rte` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-rte", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-record | AdapterHub | 2021-11-15T10:41:43Z | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"roberta",
"adapterhub:rc/record",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- roberta
- adapterhub:rc/record
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-record` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-record", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-pmb_sem_tagging | AdapterHub | 2021-11-15T10:40:53Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:semtag/pmb",
"en",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- roberta
- adapterhub:semtag/pmb
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-pmb_sem_tagging` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-newsqa | AdapterHub | 2021-11-15T10:40:45Z | 2 | 1 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"roberta",
"en",
"dataset:newsqa",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- roberta
- adapter-transformers
datasets:
- newsqa
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-newsqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-newsqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-duorc_p | AdapterHub | 2021-11-15T10:38:23Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"roberta",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- roberta
- adapter-transformers
datasets:
- duorc
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-duorc_p` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-duorc_p", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/roberta-base-pf-comqa | AdapterHub | 2021-11-15T10:37:28Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"roberta",
"en",
"dataset:com_qa",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- roberta
- adapter-transformers
datasets:
- com_qa
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-comqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-comqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-ud_deprel | AdapterHub | 2021-11-15T10:36:00Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"token-classification",
"bert",
"adapterhub:deprel/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
tags:
- token-classification
- bert
- adapterhub:deprel/ud_ewt
- adapter-transformers
datasets:
- universal_dependencies
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-ud_deprel` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_deprel", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-sst2 | AdapterHub | 2021-11-15T10:35:32Z | 42 | 1 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:sentiment/sst-2",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapterhub:sentiment/sst-2
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-sst2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-record | AdapterHub | 2021-11-15T10:34:16Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:rc/record",
"en",
"arxiv:2104.08247",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- text-classification
- bert
- adapterhub:rc/record
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-record` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-record", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-quoref | AdapterHub | 2021-11-15T10:34:05Z | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"bert",
"en",
"dataset:quoref",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- bert
- adapter-transformers
datasets:
- quoref
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-quoref` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quoref", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
AdapterHub/bert-base-uncased-pf-cq | AdapterHub | 2021-11-15T10:31:36Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"question-answering",
"bert",
"adapterhub:qa/cq",
"en",
"arxiv:2104.08247",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
tags:
- question-answering
- bert
- adapterhub:qa/cq
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-cq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cq", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
``` |
DeepPavlov/xlm-roberta-large-en-ru | DeepPavlov | 2021-11-15T08:46:05Z | 527 | 5 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"en",
"ru",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
language:
- en
- ru
---
# XLM-RoBERTa-Large-En-Ru
## Model description
This model is a version of XLM-RoBERTa with its embeddings and vocabulary reduced to the most frequent tokens in English and Russian.
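A minimal feature-extraction sketch; mean pooling over the last hidden state is just one common choice, not something prescribed by this model:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "DeepPavlov/xlm-roberta-large-en-ru"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["This is an English sentence.", "Это русское предложение."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state          # (batch, seq_len, dim)

# Mean-pool over non-padding tokens to get one vector per sentence
mask = inputs.attention_mask.unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
```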
|
ComCom/gpt2-large | ComCom | 2021-11-15T07:26:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | This model was taken from [this site](https://huggingface.co/gpt2-medium).
This model is used in the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
|
ComCom/gpt2-medium | ComCom | 2021-11-15T07:08:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | This model was taken from [this site](https://huggingface.co/gpt2-medium).
This model is used in the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
|
ComCom/gpt2 | ComCom | 2021-11-15T04:58:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | This model was taken from [this page](https://huggingface.co/gpt2).
This model is used in the [Teachable NLP](https://ainize.ai/teachable-nlp) service. |
bhuvaneswari/t5-small-text_summarization | bhuvaneswari | 2021-11-15T04:29:51Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-text_summarization
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-text_summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4591
- Rouge1: 28.6917
- Rouge2: 7.976
- Rougel: 22.6383
- Rougelsum: 22.6353
- Gen Len: 18.8185
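A minimal usage sketch with the standard summarization pipeline (the example article and generation settings are invented for illustration):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bhuvaneswari/t5-small-text_summarization")

# Invented example article
article = (
    "The local council has approved plans for a new cycle path along the river, "
    "which campaigners say will make the daily commute safer for hundreds of residents."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```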
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7006 | 1.0 | 8162 | 2.4591 | 28.6917 | 7.976 | 22.6383 | 22.6353 | 18.8185 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
phailyoor/distilbert-base-uncased-finetuned-yahd-twval-hptune | phailyoor | 2021-11-15T02:50:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-yahd-twval-hptune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd-twval-hptune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3727
- Accuracy: 0.2039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.1638 | 1.0 | 10106 | 2.1944 | 0.3646 |
| 1.7982 | 2.0 | 20212 | 2.6390 | 0.3333 |
| 1.3279 | 3.0 | 30318 | 3.1526 | 0.3095 |
| 0.8637 | 4.0 | 40424 | 4.8368 | 0.2470 |
| 0.5727 | 5.0 | 50530 | 6.3727 | 0.2039 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
phailyoor/distilbert-base-uncased-finetuned-yahd-twval | phailyoor | 2021-11-14T19:41:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-yahd-twval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd-twval
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2540
- Accuracy: 0.2664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.1967 | 1.0 | 10086 | 2.9662 | 0.2068 |
| 1.865 | 2.0 | 20172 | 2.9499 | 0.3229 |
| 1.5135 | 3.0 | 30258 | 3.3259 | 0.3036 |
| 1.2077 | 4.0 | 40344 | 3.8351 | 0.2902 |
| 1.0278 | 5.0 | 50430 | 4.2540 | 0.2664 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xls-r-300m-common_voice-tr-ft | patrickvonplaten | 2021-11-14T16:47:34Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"xls_r_repro_common_voice_tr",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- xls_r_repro_common_voice_tr
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4179
- Wer: 0.3071
- Cer: 0.0736
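A rough inference sketch (it assumes a 16 kHz mono Turkish recording; the file path is a placeholder):
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import torch
import torchaudio

model_id = "patrickvonplaten/wav2vec2-large-xls-r-300m-common_voice-tr-ft"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder for a Turkish speech recording
speech, rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```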
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.7638 | 9.09 | 500 | 0.4763 | 0.5313 | 0.1333 |
| 0.5739 | 18.18 | 1000 | 0.4007 | 0.4357 | 0.1099 |
| 0.4343 | 27.27 | 1500 | 0.3819 | 0.4060 | 0.1012 |
| 0.4401 | 36.36 | 2000 | 0.3991 | 0.3954 | 0.1001 |
| 0.2647 | 45.45 | 2500 | 0.3901 | 0.3689 | 0.0914 |
| 0.2656 | 54.55 | 3000 | 0.4284 | 0.3463 | 0.0852 |
| 0.2586 | 63.64 | 3500 | 0.4084 | 0.3297 | 0.0804 |
| 0.2041 | 72.73 | 4000 | 0.3907 | 0.3193 | 0.0781 |
| 0.4265 | 81.82 | 4500 | 0.4265 | 0.3120 | 0.0755 |
| 0.2041 | 90.91 | 5000 | 0.4240 | 0.3071 | 0.0736 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-large-xlsr-53-common_voice-tr-ft | patrickvonplaten | 2021-11-14T16:47:13Z | 11 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"xls_r_repro_common_voice_tr",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- xls_r_repro_common_voice_tr
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- Wer: 0.3104
- Cer: 0.0737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
See the *Training Metrics* tab.
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-xls-r-100m-common_voice-tr-ft | patrickvonplaten | 2021-11-14T16:43:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"xls_r_repro_common_voice_tr",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- xls_r_repro_common_voice_tr
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-100m-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-100m-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-100m](https://huggingface.co/facebook/wav2vec2-xls-r-100m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4113
- Wer: 1.0
- Cer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| 3.1315 | 9.09 | 500 | 3.3832 | 1.0 | 1.0 |
| 3.1163 | 18.18 | 1000 | 3.4252 | 1.0 | 1.0 |
| 3.121 | 27.27 | 1500 | 3.4051 | 1.0 | 1.0 |
| 3.1273 | 36.36 | 2000 | 3.4345 | 1.0 | 1.0 |
| 3.2257 | 45.45 | 2500 | 3.4097 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
midas/gupshup_h2e_bart | midas | 2021-11-14T02:09:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations have the `.source` file extension (train.source), whereas summaries have the `.target` file extension (train.target). The ".source" file needs to be passed to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. You can either download these weights locally and pass that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly, in which case the scripts will download the weights automatically.
Model names follow the "gupshup_TASK_MODEL" pattern, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
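Alternatively, the checkpoint this card describes (`midas/gupshup_h2e_bart`) can be loaded directly with `transformers`; a minimal sketch (the Hinglish dialogue string and generation settings are invented for illustration):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("midas/gupshup_h2e_bart")
model = AutoModelForSeq2SeqLM.from_pretrained("midas/gupshup_h2e_bart")

# Invented Hinglish-style chat, only to illustrate the input format
dialogue = "Rahul: kal movie dekhne chalein? Priya: haan sure, 6 baje theek hai?"

inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```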
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
midas/gupshup_e2e_bart | midas | 2021-11-14T02:09:24Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations have the `.source` file extension (train.source), whereas summaries have the `.target` file extension (train.target). The ".source" file needs to be passed to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. You can either download these weights locally and pass that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly, in which case the scripts will download the weights automatically.
Model names follow the "gupshup_TASK_MODEL" pattern, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
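Alternatively, the checkpoint this card describes (`midas/gupshup_e2e_bart`) can be loaded directly with `transformers`; a minimal sketch (the English dialogue string and generation settings are invented for illustration):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("midas/gupshup_e2e_bart")
model = AutoModelForSeq2SeqLM.from_pretrained("midas/gupshup_e2e_bart")

# Invented English chat, only to illustrate the expected input
dialogue = "Anna: are we still on for dinner tonight? Ben: yes, 7 pm at the usual place."

inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```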
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
midas/gupshup_h2e_pegasus | midas | 2021-11-14T02:09:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations have the `.source` file extension (train.source), whereas summaries have the `.target` file extension (train.target). The ".source" file needs to be passed to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. You can either download these weights locally and pass that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly, in which case the scripts will download the weights automatically.
Model names follow the "gupshup_TASK_MODEL" pattern, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
midas/gupshup_e2e_t5 | midas | 2021-11-14T02:08:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations have the `.source` file extension (train.source), whereas summaries have the `.target` file extension (train.target). The ".source" file needs to be passed to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. You can either download these weights locally and pass that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly, in which case the scripts will download the weights automatically.
Model names follow the "gupshup_TASK_MODEL" pattern, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
midas/gupshup_e2e_mbart | midas | 2021-11-14T02:06:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request the Gupshup data using [this Google form](https://docs.google.com/forms/d/1zvUk7WcldVF3RCoHdWzQPzPprtSJClrnHoIOYbzaJEI/edit?ts=61381ec0).
The dataset is available for `Hinglish Dialogues to English Summarization` (h2e) and `English Dialogues to English Summarization` (e2e). For each task, dialogues/conversations have the `.source` file extension (train.source), whereas summaries have the `.target` file extension (train.target). The ".source" file needs to be passed to the `input_path` argument and the ".target" file to the `reference_path` argument in the scripts.
## Models
All model weights are available on the Huggingface model hub. You can either download these weights locally and pass that path to the `model_name` argument in the scripts, or pass the provided alias to `model_name` directly, in which case the scripts will download the weights automatically.
Model names follow the "gupshup_TASK_MODEL" pattern, where "TASK" can be h2e or e2e and "MODEL" can be mbart, pegasus, etc., as listed below.
**1. Hinglish Dialogues to English Summary (h2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_h2e_mbart](https://huggingface.co/midas/gupshup_h2e_mbart) |
| PEGASUS | [midas/gupshup_h2e_pegasus](https://huggingface.co/midas/gupshup_h2e_pegasus) |
| T5 MTL | [midas/gupshup_h2e_t5_mtl](https://huggingface.co/midas/gupshup_h2e_t5_mtl) |
| T5 | [midas/gupshup_h2e_t5](https://huggingface.co/midas/gupshup_h2e_t5) |
| BART | [midas/gupshup_h2e_bart](https://huggingface.co/midas/gupshup_h2e_bart) |
| GPT-2 | [midas/gupshup_h2e_gpt](https://huggingface.co/midas/gupshup_h2e_gpt) |
**2. English Dialogues to English Summary (e2e)**
| Model | Huggingface Alias |
|---------|-------------------------------------------------------------------------------|
| mBART | [midas/gupshup_e2e_mbart](https://huggingface.co/midas/gupshup_e2e_mbart) |
| PEGASUS | [midas/gupshup_e2e_pegasus](https://huggingface.co/midas/gupshup_e2e_pegasus) |
| T5 MTL | [midas/gupshup_e2e_t5_mtl](https://huggingface.co/midas/gupshup_e2e_t5_mtl) |
| T5 | [midas/gupshup_e2e_t5](https://huggingface.co/midas/gupshup_e2e_t5) |
| BART | [midas/gupshup_e2e_bart](https://huggingface.co/midas/gupshup_e2e_bart) |
| GPT-2 | [midas/gupshup_e2e_gpt](https://huggingface.co/midas/gupshup_e2e_gpt) |
## Inference
### Using command line
1. Clone this repo and create a python virtual environment (https://docs.python.org/3/library/venv.html). Install the required packages using
```
git clone https://github.com/midas-research/gupshup.git
pip install -r requirements.txt
```
2. run_eval script has the following arguments.
* **model_name** : Path or alias to one of our models available on Huggingface as listed above.
* **input_path** : Source file or path to file containing conversations, which will be summarized.
* **save_path** : File path where to save summaries generated by the model.
* **reference_path** : Target file or path to file containing summaries, used to calculate metrics.
* **score_path** : File path where to save scores.
* **bs** : Batch size
* **device**: Cuda devices to use.
Please make sure you have downloaded the Gupshup dataset using the above Google form and provide the correct paths to these files via the `input_path` and `reference_path` arguments. Alternatively, you can simply put `test.source` and `test.target` in the `data/h2e/` (Hinglish to English) or `data/e2e/` (English to English) folder. For example, to generate English summaries from Hinglish dialogues using the mBART model, run the following command
```
python run_eval.py \
--model_name midas/gupshup_h2e_mbart \
--input_path data/h2e/test.source \
--save_path generated_summary.txt \
--reference_path data/h2e/test.target \
--score_path scores.txt \
--bs 8
```
Another example, to generate English summaries from English dialogues using the Pegasus model
```
python run_eval.py \
--model_name midas/gupshup_e2e_pegasus \
--input_path data/e2e/test.source \
--save_path generated_summary.txt \
--reference_path data/e2e/test.target \
--score_path scores.txt \
--bs 8
```
Please create an issue if you are facing any difficulties in replicating the results.
### References
Please cite [[1]](https://arxiv.org/abs/1910.04073) if you found the resources in this repository useful.
[1] Mehnaz, Laiba, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, and Rajiv Shah. [*GupShup: Summarizing Open-Domain Code-Switched Conversations*](https://aclanthology.org/2021.emnlp-main.499.pdf)
```
@inproceedings{mehnaz2021gupshup,
title={GupShup: Summarizing Open-Domain Code-Switched Conversations},
author={Mehnaz, Laiba and Mahata, Debanjan and Gosangi, Rakesh and Gunturi, Uma Sushmitha and Jain, Riya and Gupta, Gauri and Kumar, Amardeep and Lee, Isabelle G and Acharya, Anish and Shah, Rajiv},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={6177--6192},
year={2021}
}
```
|
zamborg/redcaps | zamborg | 2021-11-13T22:05:40Z | 1 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | # Redcaps Demo
**CURRENTLY UNDER DEVELOPMENT**
|
Osiris/neutral_non_neutral_classifier | Osiris | 2021-11-13T21:54:29Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ### Introduction:
This is a text-classification model. You can use it to check whether a sentence contains any emotion.
### Label Explanation:
LABEL_1: Non-Neutral (contains some emotion)
LABEL_0: Neutral (contains no emotion)
### Usage:
```python
>>> from transformers import pipeline
>>> nnc = pipeline('text-classification', model='Osiris/neutral_non_neutral_classifier')
>>> nnc("Hello, I'm a good model.")
```
### Accuracy:
We reach 93.98% accuracy on the validation dataset and 91.92% on the test dataset. |
Aidan8756/stephenKingModel | Aidan8756 | 2021-11-13T20:39:58Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:04Z | Trained on Stephen King's top 50 books as .txt files. |
ken11/bert-japanese-ner | ken11 | 2021-11-13T17:34:01Z | 33 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ner",
"japanese",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- ner
- token-classification
- japanese
- bert
language:
- ja
license: mit
---
## bert-japanese-ner
This model was fine-tuned for Japanese named entity recognition, using the [Japanese BERT pretrained model released by Kyoto University's Kurohashi-Chu-Murawaki Laboratory](https://nlp.ist.i.kyoto-u.ac.jp/?ku_bert_japanese) as the base model and the [ner-wikipedia-dataset released by Stockmark Inc.](https://github.com/stockmarkteam/ner-wikipedia-dataset) as training data.
## How to use
This model uses the tokenizer of the Kyoto University Japanese BERT pretrained model mentioned above.
The tokenizer is not included in this repository.
Please download it separately before use.
In addition to the tokenizer, [Juman++](https://nlp.ist.i.kyoto-u.ac.jp/?JUMAN%2B%2B) and [pyknp](https://nlp.ist.i.kyoto-u.ac.jp/?PyKNP) are also required.
Please install them beforehand.
```py
from transformers import (
BertForTokenClassification, BertTokenizer
)
from pyknp import Juman
import numpy as np
jumanpp = Juman()
tokenizer = BertTokenizer.from_pretrained("path to the downloaded Kyoto University tokenizer")
model = BertForTokenClassification.from_pretrained("ken11/bert-japanese-ner")
text = "なにか文章"  # any Japanese text
juman_result = jumanpp.analysis(text)
tokenized_text = [mrph.midasi for mrph in juman_result.mrph_list()]
inputs = tokenizer(tokenized_text, return_tensors="pt", padding='max_length', truncation=True, max_length=64, is_split_into_words=True)
pred = model(**inputs).logits[0]
pred = np.argmax(pred.detach().numpy(), axis=-1)
labels = []
for i, label in enumerate(pred):
if i + 1 > len(tokenized_text):
continue
labels.append(model.config.id2label[label])
print(f"{tokenized_text[i]}: {model.config.id2label[label]}")
print(tokenized_text)
print(labels)
```
## Training Data
Training used the [ner-wikipedia-dataset released by Stockmark Inc.](https://github.com/stockmarkteam/ner-wikipedia-dataset).
Thank you for making this useful dataset publicly available.
## Note
The named entity labels are those of the training dataset, converted to the BILUO format.
For details about the labels, please see the [ner-wikipedia-dataset overview](https://github.com/stockmarkteam/ner-wikipedia-dataset#%E6%A6%82%E8%A6%81).
## License
[The MIT license](https://opensource.org/licenses/MIT)
|
aditeyabaral/sentencetransformer-contrastive-roberta-base | aditeyabaral | 2021-11-13T13:29:45Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aditeyabaral/sentencetransformer-contrastive-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-contrastive-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
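Since the model was trained with a contrastive loss, the resulting embeddings are typically compared with cosine similarity. A small follow-up sketch, reusing `embeddings` from the snippet above:
```python
from sentence_transformers import util

# Cosine similarity between the two example sentences embedded above
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```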
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-contrastive-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-contrastive-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-contrastive-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
MisbaHF/distilbert-base-uncased-finetuned-cola | MisbaHF | 2021-11-13T13:27:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.54109909504615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7134
- Matthews Correlation: 0.5411
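A minimal inference sketch with the standard text-classification pipeline. Note that the exported labels are the generic `LABEL_0`/`LABEL_1`; for GLUE CoLA, index 1 conventionally corresponds to "grammatically acceptable" (the example sentences are invented):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MisbaHF/distilbert-base-uncased-finetuned-cola",
)
# Invented sentences: one acceptable, one scrambled
print(classifier("The book was read by the whole class."))
print(classifier("The book read class whole the by."))
```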
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5294 | 1.0 | 535 | 0.5082 | 0.4183 |
| 0.3483 | 2.0 | 1070 | 0.4969 | 0.5259 |
| 0.2355 | 3.0 | 1605 | 0.6260 | 0.5065 |
| 0.1733 | 4.0 | 2140 | 0.7134 | 0.5411 |
| 0.1238 | 5.0 | 2675 | 0.8516 | 0.5291 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
banri/distilbert-base-uncased-finetuned-cola | banri | 2021-11-13T09:52:45Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5258663312307151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7523
- Matthews Correlation: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.533 | 1.0 | 535 | 0.5318 | 0.3887 |
| 0.3562 | 2.0 | 1070 | 0.5145 | 0.5100 |
| 0.2429 | 3.0 | 1605 | 0.6558 | 0.4888 |
| 0.1831 | 4.0 | 2140 | 0.7523 | 0.5259 |
| 0.1352 | 5.0 | 2675 | 0.8406 | 0.5182 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
amirhossein1376/pft-clf-finetuned | amirhossein1376 | 2021-11-13T09:50:27Z | 12 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language: fa
widget:
- text: "امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار میشود."
- text: "وزیر امور خارجه اردن تاکید کرد که همه کشورهای عربی خواهان روابط خوب با ایران هستند.
به گزارش ایسنا به نقل از شبکه فرانس ۲۴، ایمن الصفدی معاون نخستوزیر و وزیر امور خارجه اردن پس از کنفرانس لیبی در پاریس در گفتوگویی با فرانس ۲۴ تاکید کرد: موضع اردن روشن است، ما خواستار روابط منطقهای مبتنی بر حسن همجواری و عدم مداخله در امور داخلی هستیم. بسیاری از مسائل و مشکلات منطقه نیاز به رسیدگی از طریق گفتوگو دارد.
الصفدی هرگونه گفتوگوی با واسطه اردن با ایران را رد کرده و گفت: ما با نمایندگان هیچکس صحبت نمیکنیم و زمانی که با ایران صحبت میکنیم مستقیماً با دولت این کشور بوده و از طریق تماس تلفنی وزیر امور خارجه دو کشور.
وی تاکید کرد: همه در منطقه عربی خواستار روابط خوب با ایران هستند، اما برای تحقق این امر باید روابط بر اساس شفافیت و بر اساس اصول احترام به همسایگی و عدم مداخله در امور داخلی باشد.
"
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: pft-clf-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pft-clf-finetuned
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the "FarsNews1398" dataset. This dataset contains a collection of news gathered from the Farsnews website, a news agency in Iran. You can download the dataset from [here](https://www.kaggle.com/amirhossein76/farsnews1398). I used the category, abstract, and paragraphs of each news item for text classification: the "abstract" and "paragraphs" were concatenated together, and "category" was used as the classification target.
The notebook used for fine-tuning can be found [here](https://colab.research.google.com/drive/1jC2dfKRASxCY-b6bJSPkhEJfQkOA30O0?usp=sharing). I report the loss and Matthews correlation on the validation set.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Matthews Correlation: 0.9830
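A minimal inference sketch with the standard text-classification pipeline, using one of the widget examples above as input (the category label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="amirhossein1376/pft-clf-finetuned",
)
# First widget example from this card (a sports headline)
print(classifier("امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار میشود."))
```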
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0634 | 1.0 | 20276 | 0.0617 | 0.9830 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
bs-modeling-metadata/html-metadata-exp1-subexp3-1898197 | bs-modeling-metadata | 2021-11-13T09:28:07Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
widget:
- text: " htmlOn ||| <h1"
---
# Work In Progress
# How to use?
To generate text with HTML, the sentence must start with ` htmlOn |||` (note the space at the beginning 😉). To generate normal text, you don't need to add anything.
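A minimal generation sketch following the widget example above (sampling settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "bs-modeling-metadata/html-metadata-exp1-subexp3-1898197"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Note the leading space before "htmlOn", as described above
prompt = " htmlOn ||| <h1"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0]))
```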
# Training details
We continued the pre-training of [gpt2](https://huggingface.co/gpt2).
Dataset:[Natural_Questions_HTML_reduced_all](https://huggingface.co/datasets/SaulLu/Natural_Questions_HTML_reduced_all)
50% of the examples in the training data contained `h1`, `h2`, ..., `h6` and `p` HTML tags with only their `id` and `class` attributes. 50% of the examples were just plain text.
Training example with metadata:
```
htmlOn ||| <h1 id:firstHeading class:firstHeading>Market power</h1>
From Wikipedia, the free encyclopedia
Jump to: navigation, search
Competition law
Basic concepts
History of competition law
Monopoly
Coercive monopoly
Natural monopoly
Barriers to entry
Herfindahl–Hirschman Index
Market concentration
Market power
SSNIP test
Relevant market
Merger control
Anti-competitive practices
Monopolization
Collusion
Formation of cartels
Price fixing
Bid rigging
Product bundling and tying
Refusal to deal
Group boycott
Essential facilities
Exclusive dealing
Dividing territories
Conscious parallelism
Predatory pricing
Misuse of patents and copyrights
Enforcement authorities and organizations
International Competition Network
List of competition regulators
v
t
e
<p>In economics and particularly in industrial organization, market power is the ability of a firm to profitably raise the market price of a good or service over marginal cost. In perfectly competitive markets, market participants have no market power. A firm with total market power can raise prices without losing any customers to competitors. Market participants that have market power are therefore sometimes referred to as "price makers" or "price setters", while those without are sometimes called "price takers". Significant market power occurs when prices exceed marginal cost and long run average cost, so the firm makes profit.</p>
<p>A firm with market power has the ability to individually affect either the total quantity or the prevailing price in the market. Price makers face a downward-sloping demand curve, such that price increases lead to a lower quantity demanded. The decrease in supply as a result of the exercise of market power creates an economic deadweight loss which is often viewed as socially undesirable. As a result, many countries have anti-trust or other legislation intended to limit the ability of firms to accrue market power. Such legislation often regulates mergers and sometimes introduces a judicial power to compel divestiture.</p>
<p>A firm usually has market power by virtue of controlling a large portion of the market. In extreme cases—monopoly and monopsony—the firm controls the entire market. However, market size alone is not the only indicator of market power. Highly concentrated markets may be contestable if there are no barriers to entry or exit, limiting the incumbent firm's ability to raise its price above competitive levels.</p>
<p>Market power gives firms the ability to engage in unilateral anti-competitive behavior.[1] Some of the behaviours that firms with market power are accused of engaging in include predatory pricing, product tying, and creation of overcapacity or other barriers to entry. If no individual participant in the market has significant market power, then anti-competitive behavior can take place only through collusion, or the exercise of a group of participants' collective market power.</p>
<p>The Lerner index and Herfindahl index may be used to measure market power.</p>
<p></p><h2>Contents</h2>
[hide]
1 Oligopoly
2 Monopoly power
3 Source
4 Measurement
5 Elasticity of demand
6 Nobel Memorial Prize
7 See also
8 References
9 Further references
<p></p><h2>Oligopoly[edit]</h2>
<p>When several firms control a significant share of market sales, the resulting market structure is called an oligopoly or oligopsony. An oligopoly may engage in collusion, either tacit or overt, and thereby exercise market power. A group of firms that explicitly agree to affect market price or output is called a cartel.</p>
<h2>Monopoly power[edit]</h2>
<p>Monopoly power is an example of market failure which occurs when one or more of the participants has the ability to influence the price or other outcomes in some general or specialized market. The most commonly discussed form of market power is that of a monopoly, but other forms such as monopsony, and more moderate versions of these two extremes, exist.</p>
<p>A well-known example of monopolistic market power is Microsoft's market share in PC operating systems. The United States v. Microsoft case dealt with an allegation that Microsoft illegally exercised its market power by bundling its web browser with its operating system. In this respect, the notion of dominance and dominant position in EU Antitrust Law is a strictly related aspect.[2]</p>
<h2>Source[edit]</h2>
<p>A monopoly can raise prices and retain customers because the monopoly has no competitors. If a customer has no other place to go to obtain the goods or services, they either pay the increased price or do without.[3] Thus the key to market power is to preclude competition through high barriers of entry. Barriers to entry that are significant sources
```
|
jamiewjm/CCGwGPT2 | jamiewjm | 2021-11-13T03:46:14Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: Chinese
widget:
- text: "五言藏头:春天到来|桃花|"
---
# Input format
> 格式|题目|正文 (format|title|body)
The format is one of the following:
* 五绝 (five-character quatrain)
* 五律 (five-character regulated verse)
* 七绝 (seven-character quatrain)
* 七律 (seven-character regulated verse)
* 五言排律 (five-character extended regulated verse)
* 七言排律 (seven-character extended regulated verse)
* 五言藏头:藏头字... (five-character acrostic; replace 藏头字 with the acrostic characters)
* 七言藏头:藏头字... (seven-character acrostic; replace 藏头字 with the acrostic characters)
* 对联 (couplet)
----- removed -----
* 诗经 (Classic of Poetry style)
* 乐府 (yuefu)
* 楚辞 (Chu ci)
* 词牌名 (a ci tune name, e.g. 水调歌头, 菩萨蛮...)
* 古诗 (ancient-style poem)
If the format is left empty, it defaults to the five-character quatrain (五绝).
The title is the theme of the poem and may be left empty.
The body may specify the opening characters and may also be left empty.
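A hedged generation sketch (the prompt follows the format described above; the sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jamiewjm/CCGwGPT2")

# format|title|body: a five-character acrostic on the characters 春天到来, titled 桃花, with an empty body
prompt = "五言藏头:春天到来|桃花|"
print(generator(prompt, max_length=64, do_sample=True)[0]["generated_text"])
``` |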
haji2438/bertweet-base-finetuned-IGtext | haji2438 | 2021-11-13T03:10:05Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: bertweet-base-finetuned-IGtext
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-IGtext
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6741 | 1.0 | 505 | 2.2096 |
| 2.3183 | 2.0 | 1010 | 2.0934 |
| 2.2089 | 3.0 | 1515 | 2.0595 |
| 2.1473 | 4.0 | 2020 | 2.0246 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ds198799/autonlp-predict_ROI_1-29797730 | ds198799 | 2021-11-12T22:10:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:ds198799/autonlp-data-predict_ROI_1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- ds198799/autonlp-data-predict_ROI_1
co2_eq_emissions: 2.2439127664461718
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797730
- CO2 Emissions (in grams): 2.2439127664461718
## Validation Metrics
- Loss: 0.6314184069633484
- Accuracy: 0.7596774193548387
- Macro F1: 0.4740565300039588
- Micro F1: 0.7596774193548386
- Weighted F1: 0.7371623804622154
- Macro Precision: 0.6747804619412134
- Micro Precision: 0.7596774193548387
- Weighted Precision: 0.7496542175358931
- Macro Recall: 0.47743727441146655
- Micro Recall: 0.7596774193548387
- Weighted Recall: 0.7596774193548387
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ds198799/autonlp-predict_ROI_1-29797730
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
ds198799/autonlp-predict_ROI_1-29797722 | ds198799 | 2021-11-12T22:10:08Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:ds198799/autonlp-data-predict_ROI_1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- ds198799/autonlp-data-predict_ROI_1
co2_eq_emissions: 2.7516207978192737
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797722
- CO2 Emissions (in grams): 2.7516207978192737
## Validation Metrics
- Loss: 0.6113826036453247
- Accuracy: 0.7559139784946236
- Macro F1: 0.4594734612976928
- Micro F1: 0.7559139784946236
- Weighted F1: 0.7195080232106192
- Macro Precision: 0.7175166413412577
- Micro Precision: 0.7559139784946236
- Weighted Precision: 0.7383048259333735
- Macro Recall: 0.4482203645846237
- Micro Recall: 0.7559139784946236
- Weighted Recall: 0.7559139784946236
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ds198799/autonlp-predict_ROI_1-29797722
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ds198799/autonlp-predict_ROI_1-29797722", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ds198799/autonlp-predict_ROI_1-29797722", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
fadhilarkan/distilbert-base-uncased-finetuned-cola-3 | fadhilarkan | 2021-11-12T18:12:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Matthews Correlation: 1.0
Label 0 : "AIMX"
Label 1 : "OWNX"
Label 2 : "CONT"
Label 3 : "BASE"
Label 4 : "MISC"
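A minimal inference sketch that maps the predicted class index to the label names listed above (the input sentence is illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "fadhilarkan/distilbert-base-uncased-finetuned-cola-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

labels = ["AIMX", "OWNX", "CONT", "BASE", "MISC"]  # label ids 0-4, as listed above
inputs = tokenizer("The aim of this paper is to propose a new method.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(-1).item()])
```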
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 192 | 0.0060 | 1.0 |
| No log | 2.0 | 384 | 0.0019 | 1.0 |
| 0.0826 | 3.0 | 576 | 0.0010 | 1.0 |
| 0.0826 | 4.0 | 768 | 0.0006 | 1.0 |
| 0.0826 | 5.0 | 960 | 0.0005 | 1.0 |
| 0.001 | 6.0 | 1152 | 0.0004 | 1.0 |
| 0.001 | 7.0 | 1344 | 0.0003 | 1.0 |
| 0.0005 | 8.0 | 1536 | 0.0003 | 1.0 |
| 0.0005 | 9.0 | 1728 | 0.0002 | 1.0 |
| 0.0005 | 10.0 | 1920 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
espnet/siddhana_fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_r-truncated-36174d | espnet | 2021-11-12T17:59:03Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc_challenge",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- fsc_challenge
license: cc-by-4.0
---
## ESPnet2 ASR pretrained model
### `siddhana/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5656007
This model was trained by siddhana using fsc_challenge/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
kensho/5gram-spanish-kenLM | kensho | 2021-11-12T11:26:01Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | This is an example of how a kenLM model can be downloaded with [PyCTCDecode](https://github.com/kensho-technologies/pyctcdecode).
Simply run the following code:
```python
from pyctcdecode import LanguageModel
language_model = LanguageModel.load_from_hf_hub("kensho/5gram-spanish-kenLM")
```
The model was trained by [Patrick von Platen](https://huggingface.co/patrickvonplaten) for demonstration purposes. |
jcblaise/roberta-tagalog-large | jcblaise | 2021-11-12T03:25:48Z | 23 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"tagalog",
"filipino",
"tl",
"arxiv:2111.06053",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- roberta
- tagalog
- filipino
license: cc-by-sa-4.0
inference: false
---
# RoBERTa Tagalog Large
Tagalog RoBERTa trained as an improvement over our previous Tagalog pretrained Transformers. Trained with TLUnified, a newer, larger, more topically-varied pretraining corpus for Filipino. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This model is a cased model. We do not release uncased RoBERTa models.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2021improving,
title={Improving Large-scale Language Models and Resources for Filipino},
author={Jan Christian Blaise Cruz and Charibeth Cheng},
journal={arXiv preprint arXiv:2111.06053},
year={2021}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/roberta-tagalog-base | jcblaise | 2021-11-12T03:25:36Z | 263 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"tagalog",
"filipino",
"tl",
"arxiv:2111.06053",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- roberta
- tagalog
- filipino
license: cc-by-sa-4.0
inference: false
---
# RoBERTa Tagalog Base
Tagalog RoBERTa trained as an improvement over our previous Tagalog pretrained Transformers. Trained with TLUnified, a newer, larger, more topically-varied pretraining corpus for Filipino. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This model is a cased model. We do not release uncased RoBERTa models.
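A hedged usage sketch for masked-token prediction (the example sentence is illustrative; the mask token is read from the tokenizer rather than hard-coded):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jcblaise/roberta-tagalog-base")
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"Magandang {mask} sa inyong lahat!"))
```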
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2021improving,
title={Improving Large-scale Language Models and Resources for Filipino},
author={Jan Christian Blaise Cruz and Charibeth Cheng},
journal={arXiv preprint arXiv:2111.06053},
year={2021}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/electra-tagalog-small-uncased-discriminator | jcblaise | 2021-11-12T03:24:06Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"tagalog",
"filipino",
"tl",
"license:gpl-3.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# ELECTRA Tagalog Small Uncased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/electra-tagalog-base-uncased-discriminator | jcblaise | 2021-11-12T03:23:51Z | 36 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"pretraining",
"tagalog",
"filipino",
"tl",
"license:gpl-3.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# ELECTRA Tagalog Base Uncased Discriminator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/bert-tagalog-base-cased | jcblaise | 2021-11-12T03:21:35Z | 21 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"tagalog",
"filipino",
"tl",
"arxiv:2005.02068",
"arxiv:1907.00409",
"license:gpl-3.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# BERT Tagalog Base Cased
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/bert-tagalog-base-uncased-WWM | jcblaise | 2021-11-12T03:21:09Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"tagalog",
"filipino",
"tl",
"arxiv:2005.02068",
"arxiv:1907.00409",
"license:gpl-3.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# BERT Tagalog Base Uncased (Whole Word Masking)
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/distilbert-tagalog-base-cased | jcblaise | 2021-11-12T03:20:40Z | 13 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"distilbert",
"bert",
"tagalog",
"filipino",
"tl",
"arxiv:2005.02068",
"arxiv:1907.00409",
"license:gpl-3.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- distilbert
- bert
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# DistilBERT Tagalog Base Cased
Tagalog version of DistilBERT, distilled from [`bert-tagalog-base-cased`](https://huggingface.co/jcblaise/bert-tagalog-base-cased). This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
## Usage
The model can be loaded and used in both PyTorch and TensorFlow through the HuggingFace Transformers package.
```python
from transformers import TFAutoModel, AutoModel, AutoTokenizer
# TensorFlow
model = TFAutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased', from_pt=True)
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
# PyTorch
model = AutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased')
tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False)
```
Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2020establishing,
title={Establishing Baselines for Text Classification in Low-Resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:2005.02068},
year={2020}
}
@article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
huggingartists/hyuna | huggingartists | 2021-11-11T21:31:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/hyuna",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/hyuna
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e802afac5a0100ca75e520f954182f73.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">HyunA (현아)</div>
<a href="https://genius.com/artists/hyuna">
<div style="text-align: center; font-size: 14px;">@hyuna</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from HyunA (현아).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/hyuna).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/hyuna")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3uo94mxd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on HyunA (현아)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1o8t0mq0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1o8t0mq0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/hyuna')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/hyuna")
model = AutoModelWithLMHead.from_pretrained("huggingartists/hyuna")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/as-i-lay-dying | huggingartists | 2021-11-11T19:15:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/as-i-lay-dying",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/as-i-lay-dying
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/1584118378f9cfa83c281027ef8b2141.528x528x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">As I Lay Dying</div>
<a href="https://genius.com/artists/as-i-lay-dying">
<div style="text-align: center; font-size: 14px;">@as-i-lay-dying</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from As I Lay Dying.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/as-i-lay-dying).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/as-i-lay-dying")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2zq9ub8b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on As I Lay Dying's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/cjg5ac7f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/cjg5ac7f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/as-i-lay-dying')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/as-i-lay-dying")
model = AutoModelWithLMHead.from_pretrained("huggingartists/as-i-lay-dying")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
vuiseng9/bert-base-uncased-squadv1-59.6-sparse | vuiseng9 | 2021-11-11T18:13:58Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* Tensorflow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: some accuracy is lost in the PyTorch-to-TensorFlow conversion; evaluation results differ between the PyTorch and TensorFlow models.
* The table below was evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in the attention heads and FFNs.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
vuiseng9/bert-base-uncased-squadv1-65.1-sparse | vuiseng9 | 2021-11-11T18:13:39Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* Tensorflow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: some accuracy is lost in the PyTorch-to-TensorFlow conversion; evaluation results differ between the PyTorch and TensorFlow models.
* The table below was evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in the attention heads and FFNs.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
vuiseng9/bert-base-uncased-squadv1-72.9-sparse | vuiseng9 | 2021-11-11T18:13:18Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* Tensorflow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: some accuracy is lost in the PyTorch-to-TensorFlow conversion; evaluation results differ between the PyTorch and TensorFlow models.
* The table below was evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in the attention heads and FFNs.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
huggingface-course/bert-finetuned-squad | huggingface-course | 2021-11-11T17:49:56Z | 757 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: test-bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-bert-finetuned-squad
This model was trained from scratch on the squad dataset.
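A short extractive question-answering sketch (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="huggingface-course/bert-finetuned-squad")
result = qa(
    question="What is extractive question answering?",
    context="Extractive question answering is the task of extracting an answer span from a given context.",
)
print(result["answer"], result["score"])
```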
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
huggingface-course/marian-finetuned-kde4-en-to-fr | huggingface-course | 2021-11-11T17:45:32Z | 306 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: test-marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.94161337775576
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9416
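A rough usage sketch (the input sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="huggingface-course/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```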
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
huggingface-course/distilbert-base-uncased-finetuned-imdb | huggingface-course | 2021-11-11T17:42:21Z | 463 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4264
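A minimal masked-language-modeling sketch (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="huggingface-course/distilbert-base-uncased-finetuned-imdb")
print(fill_mask("This is a great [MASK]."))
```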
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.708 | 1.0 | 157 | 2.4715 |
| 2.5627 | 2.0 | 314 | 2.4145 |
| 2.5385 | 3.0 | 471 | 2.4451 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
huggingface-course/mt5-small-finetuned-amazon-en-es | huggingface-course | 2021-11-11T17:26:47Z | 453 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0285
- Rouge1: 16.9728
- Rouge2: 8.2969
- Rougel: 16.8366
- Rougelsum: 16.8510
- Gen Len: 10.1597
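A rough summarization sketch (the review text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="huggingface-course/mt5-small-finetuned-amazon-en-es")
review = (
    "Nothing special at all about this product. The book is too small and stiff and "
    "hard to write in; the cover looks nice but the paper quality is disappointing."
)
print(summarizer(review)[0]["summary_text"])
```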
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 6.4205 | 1.0 | 1209 | 3.3904 | 7.3124 | 2.1083 | 7.0649 | 7.0966 | 4.7269 |
| 3.7818 | 2.0 | 2418 | 3.1762 | 10.5437 | 3.0706 | 10.4618 | 10.4713 | 5.3697 |
| 3.4672 | 3.0 | 3627 | 3.1304 | 10.4674 | 3.0531 | 10.2156 | 10.2549 | 5.9748 |
| 3.3179 | 4.0 | 4836 | 3.1170 | 11.2847 | 3.3152 | 11.1387 | 11.146 | 6.1723 |
| 3.2048 | 5.0 | 6045 | 3.1069 | 11.5212 | 3.1957 | 11.2117 | 11.2044 | 6.042 |
| 3.1211 | 6.0 | 7254 | 3.1028 | 11.8104 | 3.6482 | 11.5535 | 11.5259 | 6.0462 |
| 3.0724 | 7.0 | 8463 | 3.1001 | 11.7336 | 3.6575 | 11.4403 | 11.4738 | 5.9454 |
| 3.0476 | 8.0 | 9672 | 3.0983 | 11.8061 | 3.6575 | 11.4999 | 11.5414 | 5.9286 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.2.dev0
- Tokenizers 0.10.3
|
healx/biomedical-dpr-qry-encoder | healx | 2021-11-11T10:35:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"arxiv:2109.08564",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | DPR query encoder for biomedical slot filling; see https://arxiv.org/abs/2109.08564 for details.
Load with:
```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizerFast

# Load the fine-tuned biomedical query encoder and the standard DPR question tokenizer.
qry_encoder = DPRQuestionEncoder.from_pretrained('healx/biomedical-dpr-qry-encoder')
qry_tokenizer = DPRQuestionEncoderTokenizerFast.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
``` |
jcblaise/electra-tagalog-base-cased-generator | jcblaise | 2021-11-11T06:19:45Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"tagalog",
"filipino",
"tl",
"license:gpl-3.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Cased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
jcblaise/electra-tagalog-base-uncased-generator | jcblaise | 2021-11-11T06:19:05Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"tagalog",
"filipino",
"tl",
"license:gpl-3.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
# ELECTRA Tagalog Base Uncased Generator
Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
wangsheng/autonlp-poi_train-31237266 | wangsheng | 2021-11-10T14:09:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:wangsheng/autonlp-data-poi_train",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- wangsheng/autonlp-data-poi_train
co2_eq_emissions: 390.39411176775826
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 31237266
- CO2 Emissions (in grams): 390.39411176775826
## Validation Metrics
- Loss: 0.1643059253692627
- Accuracy: 0.9379398019660155
- Precision: 0.7467491278147795
- Recall: 0.7158710854363028
- AUC: 0.9631629384458238
- F1: 0.7309841664079478
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/wangsheng/autonlp-poi_train-31237266
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("wangsheng/autonlp-poi_train-31237266", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("wangsheng/autonlp-poi_train-31237266", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
XSY/albert-base-v2-scarcasm-discriminator | XSY | 2021-11-10T12:56:20Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: albert-base-v2-scarcasm-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-scarcasm-discriminator
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2379
- Accuracy: 0.8996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2111 | 1.0 | 2179 | 0.2379 | 0.8996 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
LHF/FinEAS | LHF | 2021-11-10T11:16:21Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"finance",
"sentiment analysis",
"regression",
"sentence bert",
"en",
"dataset:RavenPack",
"arxiv:2111.00526",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- en
license: apache-2.0
tags:
- "finance"
- "sentiment analysis"
- "regression"
- "sentence bert"
datasets:
- "RavenPack"
metrics:
- "rmse"
---
# FinEAS: Financial Embedding Analysis of Sentiment
SentenceBERT for Financial News Sentiment Regression
**DISCLAIMER:** This model has been successfully tested with a test set of the same distribution. However, it is **not** a production-ready model as it probably needs to be updated continuously. Furthermore, the model should have been trained with more than two years of historical data. Additionally, it would need a supplementary assessment on bias, security and consistency.
## Introduction
Analyzing the sentiment of financial news is a complex task that requires a deep understanding of financial slang, knowledge of the context of each company, and awareness of the interactions of the whole economy as an ecosystem.
The [FinBERT](https://huggingface.co/ProsusAI/finbert) model classifies sentiment as either positive or negative. However, binary classification is too coarse and does not reflect reality.
RavenPack provides an excellent, large, hand-labelled dataset with a continuous sentiment label that ranges from -1 to 1. We collected data from the two previous years and tested on data from the following two weeks. We also cut the dataset into one-year and six-month subsamples to see how the model scales with more data and whether additional data actually helps.
In this repository you can find the different models by changing the branch name. The main branch contains the model trained on the whole dataset. We also uploaded the FinBERT regressor to the Hub: https://huggingface.co/LHF/finbert-regressor
**Note that the predictions of this model range from 0 to 1, where 0.5 is neutral, 1 is positive and 0 is negative.**
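A minimal usage sketch is shown below. It assumes the checkpoint is compatible with the standard Transformers sequence-classification classes and returns a single regression output in the 0 to 1 range described above; the headline string is purely illustrative, and the repository linked in the Code section should be treated as the reference implementation.
```python
# Sketch only: assumes a single regression logit scaled to [0, 1] (0.5 = neutral).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "LHF/FinEAS"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

headline = "Company X beats quarterly earnings expectations."  # illustrative input
inputs = tokenizer(headline, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # closer to 1 -> positive, closer to 0 -> negative
```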
## Evaluation
| Dates | FinEAS | FinBERT |
|-----------|--------|---------|
| 6 months | 0.0044 | 0.0050 |
| 12 months | 0.0036 | 0.0034 |
| 24 months | 0.0033 | 0.0040 |
*Evaluated on data from the following two weeks.
## Code
You can find the code for this model in the following link: https://github.com/lhf-labs/finance-news-analysis-bert
## Citation
```
@misc{gutierrezfandino2021fineas,
title={FinEAS: Financial Embedding Analysis of Sentiment},
author={Asier Gutiérrez-Fandiño and Miquel Noguer i Alonso and Petter Kolm and Jordi Armengol-Estapé},
year={2021},
eprint={2111.00526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
LHF/finbert-regressor | LHF | 2021-11-10T11:16:14Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"finance",
"sentiment analysis",
"regression",
"finbert",
"en",
"dataset:RavenPack",
"arxiv:2111.00526",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language:
- en
license: apache-2.0
tags:
- "finance"
- "sentiment analysis"
- "regression"
- "finbert"
datasets:
- "RavenPack"
metrics:
- "rmse"
---
# FinBERT for Financial News Sentiment Regression
**DISCLAIMER:** This model has been successfully tested with a test set of the same distribution. However, it is **not** a production-ready model as it probably needs to be updated continuously. Furthermore, the model should have been trained with more than two years of historical data. Additionally, it would need a supplementary assessment on bias, security and consistency.
## Introduction
Analyzing the sentiment of financial news is a complex task that requires a deep understanding of financial slang, knowledge of the context of each company, and awareness of the interactions of the whole economy as an ecosystem.
The [FinBERT](https://huggingface.co/ProsusAI/finbert) model classifies sentiment as either positive or negative. However, binary classification is too coarse and does not reflect reality.
RavenPack provides an excellent, large, hand-labelled dataset with a continuous sentiment label that ranges from -1 to 1. We collected data from the two previous years and tested on data from the following two weeks. We also cut the dataset into one-year and six-month subsamples to see how the model scales with more data and whether additional data actually helps.
In this repository you can find the different models by changing the branch name. The main branch contains the model trained on the whole dataset. We also uploaded the best regressor FinEAS to the Hub: https://huggingface.co/LHF/FinEAS
**Note that the predictions of this model range from 0 to 1, where 0.5 is neutral, 1 is positive and 0 is negative.**
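If a downstream application needs scores on the original -1 to 1 scale, a simple linear rescaling of the 0 to 1 output can be used; this mapping is our own assumption based on the note above, not something specified by the authors.
```python
# Assumed linear mapping: 0 -> -1 (negative), 0.5 -> 0 (neutral), 1 -> 1 (positive).
def to_original_scale(score_01: float) -> float:
    return 2.0 * score_01 - 1.0

print(to_original_scale(0.25))  # -0.5
```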
## Evaluation
| Dates | FinEAS | FinBERT |
|-----------|--------|---------|
| 6 months | 0.0044 | 0.0050 |
| 12 months | 0.0036 | 0.0034 |
| 24 months | 0.0033 | 0.0040 |
*Evaluated on data from the following two weeks.
## Code
You can find the code for this model in the following link: https://github.com/lhf-labs/finance-news-analysis-bert
## Citation
```
@misc{gutierrezfandino2021fineas,
title={FinEAS: Financial Embedding Analysis of Sentiment},
author={Asier Gutiérrez-Fandiño and Miquel Noguer i Alonso and Petter Kolm and Jordi Armengol-Estapé},
year={2021},
eprint={2111.00526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Zayt/viRoberta-l6-h384-word-cased | Zayt | 2021-11-10T09:54:45Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | More information: [github](https://github.com/TanHM-1211/viRoberta-l6-h384-cased)
```python
from underthesea import word_tokenize
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name = 'Zayt/viRoberta-l6-h384-word-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
text = word_tokenize("Xin chào, tôi không còn là sinh viên đại học Bách Khoa.", format='text')
output = model(**tokenizer(text, return_tensors='pt'))
output
``` |
nateraw/huggingpics-package-demo-2 | nateraw | 2021-11-09T21:00:52Z | 68 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- image-classification
- huggingpics
- generated_from_trainer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingpics-package-demo-2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3761
- Acc: 0.9403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0328 | 1.0 | 24 | 0.9442 | 0.7463 |
| 0.8742 | 2.0 | 48 | 0.7099 | 0.9403 |
| 0.6451 | 3.0 | 72 | 0.5050 | 0.9403 |
| 0.508 | 4.0 | 96 | 0.3761 | 0.9403 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
shtoshni/longformer_coreference_ontonotes | shtoshni | 2021-11-09T19:31:06Z | 16 | 1 | transformers | [
"transformers",
"pytorch",
"longformer",
"feature-extraction",
"arxiv:2109.09667",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | Longformer-large model finetuned for the coreference resolution task. The model is fine-tuned over the OntoNotes data. The model is released as part of [this paper](https://arxiv.org/pdf/2109.09667.pdf). Note that the document encoder is to be used with the rest of the model parameters to perform the coreference resolution task. For demo purposes, please check this [Colab notebook](https://colab.research.google.com/drive/11ejXc1wDqzUxpgRH1nLvqEifAX30Z71_?usp=sharing). |
d42kw01f/Tamil-RoBERTa | d42kw01f | 2021-11-09T16:04:44Z | 19 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | # Description:
This is a small pre-trained model for the Tamil language, trained with masked language modeling (MLM) on the OSCAR Tamil dataset.
# How to Use:
The model can be used directly with a pipeline for masked language modeling:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> model = AutoModelForMaskedLM.from_pretrained("d42kw01f/Tamil-RoBERTa")
>>> fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> fill_mask("நான் வீட்டு <mask>.")
``` |
tiennvcs/layoutlmv2-large-uncased-finetuned-infovqa | tiennvcs | 2021-11-09T13:42:04Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | document-question-answering | 2022-03-02T23:29:05Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-large-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-large-uncased-finetuned-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-large-uncased](https://huggingface.co/microsoft/layoutlmv2-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1829 | 0.08 | 500 | 3.6339 |
| 3.5002 | 0.16 | 1000 | 3.0721 |
| 2.9556 | 0.24 | 1500 | 2.8731 |
| 2.8939 | 0.33 | 2000 | 3.1566 |
| 2.6986 | 0.41 | 2500 | 3.1023 |
| 2.7569 | 0.49 | 3000 | 2.7743 |
| 2.6391 | 0.57 | 3500 | 2.5023 |
| 2.4277 | 0.65 | 4000 | 2.5465 |
| 2.4242 | 0.73 | 4500 | 2.4709 |
| 2.3978 | 0.82 | 5000 | 2.4019 |
| 2.2653 | 0.9 | 5500 | 2.3383 |
| 2.3916 | 0.98 | 6000 | 2.4765 |
| 1.9423 | 1.06 | 6500 | 2.3798 |
| 1.8538 | 1.14 | 7000 | 2.3628 |
| 1.8136 | 1.22 | 7500 | 2.3671 |
| 1.7808 | 1.31 | 8000 | 2.5585 |
| 1.7772 | 1.39 | 8500 | 2.5862 |
| 1.755 | 1.47 | 9000 | 2.3105 |
| 1.6529 | 1.55 | 9500 | 2.2417 |
| 1.6956 | 1.63 | 10000 | 2.1755 |
| 1.5713 | 1.71 | 10500 | 2.2917 |
| 1.565 | 1.79 | 11000 | 2.0838 |
| 1.615 | 1.88 | 11500 | 2.2111 |
| 1.5249 | 1.96 | 12000 | 2.2207 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.0+cu101
- Datasets 1.15.1
- Tokenizers 0.10.3
|
pourzare/wav2vec2-base-timit-demo-colab | pourzare | 2021-11-09T09:53:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3821
- Wer: 0.3841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7018 | 2.01 | 500 | 1.9216 | 0.9924 |
| 1.0211 | 4.02 | 1000 | 0.5051 | 0.5095 |
| 0.4293 | 6.02 | 1500 | 0.4209 | 0.4282 |
| 0.2513 | 8.03 | 2000 | 0.3821 | 0.3841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
lucasresck/bert-base-cased-ag-news | lucasresck | 2021-11-09T02:11:29Z | 28 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"en",
"dataset:ag_news",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: mit
tags:
- bert
- classification
datasets:
- ag_news
metrics:
- accuracy
- f1
- recall
- precision
widget:
- text: "Is it soccer or football?"
example_title: "Sports"
- text: "A new version of Ubuntu was released."
example_title: "Sci/Tech"
---
# bert-base-cased-ag-news
BERT model fine-tuned on AG News classification dataset using a linear layer on top of the [CLS] token output, with 0.945 test accuracy.
### How to use
Here is how to use this model to classify a given text:
```python
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('lucasresck/bert-base-cased-ag-news')
model = BertForSequenceClassification.from_pretrained('lucasresck/bert-base-cased-ag-news')
text = "Is it soccer or football?"
encoded_input = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
output = model(**encoded_input)
```
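To turn the logits into a class name, one can take the argmax and look it up in the model configuration; if the configuration does not define readable names, this falls back to generic `LABEL_i` entries.
```python
import torch

# Pick the highest-scoring class and map it to its name via the config.
predicted_id = torch.argmax(output.logits, dim=-1).item()
print(model.config.id2label[predicted_id])
```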
### Limitations and bias
Bias was not assessed in this model, but, considering that pre-trained BERT is known to carry bias, this model is expected to carry it as well. BERT's authors say: "This bias will also affect all fine-tuned versions of this model."
## Evaluation results
```
precision recall f1-score support
0 0.9539 0.9584 0.9562 1900
1 0.9884 0.9879 0.9882 1900
2 0.9251 0.9095 0.9172 1900
3 0.9127 0.9242 0.9184 1900
accuracy 0.9450 7600
macro avg 0.9450 0.9450 0.9450 7600
weighted avg 0.9450 0.9450 0.9450 7600
```
|
hakurei/lit-6B | hakurei | 2021-11-08T23:02:41Z | 45,067 | 67 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- pytorch
- causal-lm
license: mit
---
# Lit-6B - A Large Fine-tuned Model For Fictional Storytelling
Lit-6B is a GPT-J 6B model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text.
## Model Description
The model used for fine-tuning is [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax), which is a 6 billion parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/).
## Training Data & Annotative Prompting
The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations.
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ]
***
When a traveler in north central Massachusetts takes the wrong fork...
```
The annotations can be mixed and matched to help generate towards a specific style.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers.
## Example Code
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('hakurei/lit-6B')
tokenizer = AutoTokenizer.from_pretrained('hakurei/lit-6B')
prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
An example output from this code produces a result that will look similar to:
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler comes to an unknown region, his thoughts turn inevitably towards the old gods and legends which cluster around its appearance. It is not that he believes in them or suspects their reality—but merely because they are present somewhere else in creation just as truly as himself, and so belong of necessity in any landscape whose features cannot be altogether strange to him. Moreover, man has been prone from ancient times to brood over those things most connected with the places where he dwells. Thus the Olympian deities who ruled Hyper
```
## Team members and Acknowledgements
This project would not have been possible without the computational resources graciously provided by the [TPU Research Cloud](https://sites.research.google/trc/)
- [Anthony Mercurio](https://github.com/harubaru)
- Imperishable_NEET |
LACAI/roberta-large-dialog-narrative | LACAI | 2021-11-08T22:20:03Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: output_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_mlm
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.5832 | 0.19 | 15000 | 1.4992 |
| 1.5325 | 0.39 | 30000 | 1.4653 |
| 1.4979 | 0.58 | 45000 | 1.4359 |
| 1.4715 | 0.77 | 60000 | 1.4039 |
| 1.4448 | 0.97 | 75000 | 1.3877 |
| 1.4191 | 1.16 | 90000 | 1.3603 |
| 1.3988 | 1.35 | 105000 | 1.3425 |
| 1.3699 | 1.54 | 120000 | 1.3230 |
| 1.3493 | 1.74 | 135000 | 1.3012 |
| 1.3201 | 1.93 | 150000 | 1.2773 |
| 1.2993 | 2.12 | 165000 | 1.2617 |
| 1.2745 | 2.32 | 180000 | 1.2490 |
| 1.2614 | 2.51 | 195000 | 1.2283 |
| 1.2424 | 2.7 | 210000 | 1.2152 |
| 1.2296 | 2.9 | 225000 | 1.2052 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
abhishek/autonlp-toxic-new-30516963 | abhishek | 2021-11-08T19:31:37Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:abhishek/autonlp-data-toxic-new",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-toxic-new
co2_eq_emissions: 30.684995819386277
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 30516963
- CO2 Emissions (in grams): 30.684995819386277
## Validation Metrics
- Loss: 0.08340361714363098
- Accuracy: 0.9688222161294113
- Precision: 0.9102096627164995
- Recall: 0.7692604006163328
- AUC: 0.9859340458715813
- F1: 0.8338204592901879
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-toxic-new-30516963
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
fbaigt/procbert | fbaigt | 2021-11-08T15:08:01Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:2109.04711",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- en
datasets:
- pubmed
- chemical patent
- cooking recipe
---
## ProcBERT
ProcBERT is a pre-trained language model specifically for procedural text. It was pre-trained on a large-scale procedural corpus (PubMed articles/chemical patents/cooking recipes) containing over 12B tokens and shows great performance on downstream tasks. More details can be found in the following [paper](https://arxiv.org/abs/2109.04711):
```
@inproceedings{bai-etal-2021-pre,
title = "Pre-train or Annotate? Domain Adaptation with a Constrained Budget",
author = "Bai, Fan and
Ritter, Alan and
Xu, Wei",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
}
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("fbaigt/procbert")
model = AutoModelForTokenClassification.from_pretrained("fbaigt/procbert")
```
More usage details can be found [here](https://github.com/bflashcp3f/ProcBERT). |
DeepPavlov/bert-base-cased-conversational | DeepPavlov | 2021-11-08T13:07:31Z | 562 | 8 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"en",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
language: en
---
# bert-base-cased-conversational
Conversational BERT \(English, cased, 12‑layer, 768‑hidden, 12‑heads, 110M parameters\) was trained on the English part of Twitter, Reddit, DailyDialogues\[1\], OpenSubtitles\[2\], Debates\[3\], Blogs\[4\], Facebook News Comments. We used this training data to build the vocabulary of English subtokens and took the English cased version of BERT‑base as an initialization for English Conversational BERT.
08.11.2021: upload model with MLM and NSP heads
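Since the checkpoint now ships with an MLM head, it can in principle be used for masked-token prediction; the sketch below assumes the uploaded head weights are picked up when loading with the masked-LM classes (otherwise the head would be randomly initialised and the outputs would not be meaningful).
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

name = "DeepPavlov/bert-base-cased-conversational"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)  # assumes the uploaded MLM head weights are present

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("how are you [MASK] ?"))
```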
\[1\]: Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. IJCNLP 2017.
\[2\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[3\]: Justine Zhang, Ravi Kumar, Sujith Ravi, Cristian Danescu-Niculescu-Mizil. Proceedings of NAACL, 2016.
\[4\]: J. Schler, M. Koppel, S. Argamon and J. Pennebaker \(2006\). Effects of Age and Gender on Blogging in Proceedings of 2006 AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs.
|
DeepPavlov/rubert-base-cased-conversational | DeepPavlov | 2021-11-08T13:06:54Z | 2,983 | 19 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
language:
- ru
---
# rubert-base-cased-conversational
Conversational RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\]. We assembled a new vocabulary for Conversational RuBERT model on this data and initialized the model with [RuBERT](../rubert-base-cased).
08.11.2021: upload model with MLM and NSP heads
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
|
DeepPavlov/bert-base-bg-cs-pl-ru-cased | DeepPavlov | 2021-11-08T12:58:09Z | 2,863 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"bg",
"cs",
"pl",
"ru",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:04Z | ---
language:
- bg
- cs
- pl
- ru
---
# bert-base-bg-cs-pl-ru-cased
SlavicBERT\[1\] \(Slavic \(bg, cs, pl, ru\), cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on Russian News and four Wikipedias: Bulgarian, Czech, Polish, and Russian. Subtoken vocabulary was built using this data. Multilingual BERT was used as an initialization for SlavicBERT.
08.11.2021: upload model with MLM and NSP heads
\[1\]: Arkhipov M., Trofimova M., Kuratov Y., Sorokin A. \(2019\). [Tuning Multilingual Transformers for Language-Specific Named Entity Recognition](https://www.aclweb.org/anthology/W19-3712/). ACL anthology W19-3712.
|
CLTL/icf-levels-stm | CLTL | 2021-11-08T12:26:53Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Emotional Functioning Levels (ICF b152)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with emotional functioning: emotions are appropriate, well regulated, etc.
3 | Slight problem with emotional functioning: irritable, gloomy, etc.
2 | Moderate problem with emotional functioning: negative emotions, such as fear, anger, sadness, etc.
1 | Severe problem with emotional functioning: intense negative emotions, such as fear, anger, sadness, etc.
0 | Flat affect, apathy, unstable, inappropriate emotions.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
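Since out-of-scale values are expected from a regression head, one simple post-processing choice (our own suggestion, not something prescribed by the authors) is to clip the prediction to the documented range and, if a discrete level is needed, round it:
```python
import numpy as np

# Clip to the 0-4 scale and round to the nearest documented functioning level.
def to_level(prediction: float) -> int:
    return int(np.rint(np.clip(prediction, 0, 4)))

print(to_level(4.2))  # 4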
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-stm',
use_cuda=False,
)
example = 'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.60
```
The raw outputs look like this:
```
[[1.60418844]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.76 | 0.68
mean squared error | 1.03 | 0.87
root mean squared error | 1.01 | 0.93
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CLTL/icf-levels-fac | CLTL | 2021-11-08T12:13:55Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Walking Functioning Levels (ICF d550)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
5 | Patient can walk independently anywhere: level surface, uneven surface, slopes, stairs.
4 | Patient can walk independently on level surface but requires help on stairs, inclines, uneven surface; or, patient can walk independently, but the walking is not fully normal.
3 | Patient requires verbal supervision for walking, without physical contact.
2 | Patient needs continuous or intermittent support of one person to help with balance and coordination.
1 | Patient needs firm continuous support from one person who helps carrying weight and with balance.
0 | Patient cannot walk or needs help from two or more people; or, patient walks on a treadmill.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np

from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-fac',
use_cuda=False,
)
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
4.2
```
The raw outputs look like this:
```
[[4.20903111]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.70 | 0.66
mean squared error | 0.91 | 0.93
root mean squared error | 0.95 | 0.96
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CLTL/icf-levels-enr | CLTL | 2021-11-08T10:45:45Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"nl",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Energy Levels (ICF b1300)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with the energy level.
3 | Slight fatigue that causes mild limitations.
2 | Moderate fatigue; the patient gets easily tired from light activities or needs a long time to recover after an activity.
1 | Severe fatigue; the patient is capable of very little.
0 | Very severe fatigue; unable to do anything and mostly lays in bed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np

from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-enr',
use_cuda=False,
)
example = 'Al jaren extreme vermoeidheid overdag, valt overdag in slaap tijdens school- en werkactiviteiten en soms zelfs tijdens een gesprek.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.98
```
The raw outputs look like this:
```
[[1.97520316]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.43
mean squared error | 0.49 | 0.42
root mean squared error | 0.70 | 0.65
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
tkesonia/xlm-roberta-base-finetuned-marc-en | tkesonia | 2021-11-08T08:53:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9211
- Mae: 0.5122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1436 | 1.0 | 235 | 1.0181 | 0.5366 |
| 0.9756 | 2.0 | 470 | 0.9211 | 0.5122 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
wangfan/jdt-fin-roberta-wwm-large | wangfan | 2021-11-08T07:03:09Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"roberta-wwm",
"zh",
"dataset:finance",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: zh
tags:
- roberta-wwm
license: apache-2.0
datasets:
- finance
---
Pre-trained language models are being used more and more frequently across many business applications. To achieve better results on tasks in the financial domain, we release the jdt-fin-roberta-wwm model.
## Models
* `base` model: 12-layer, 768-hidden, 12-heads, 110M parameters
| Model name | Corpus | Download |
| - | - | - |
| fin-roberta-wwm | Financial corpus | - |
## Quick start
### Using Huggingface-Transformers
Based on [Huggingface-Transformers](https://github.com/huggingface/transformers), the models above can be loaded easily.
```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BertModel.from_pretrained("MODEL_NAME")
```
**Note: all models in this repository are loaded with BertTokenizer and BertModel; do not use RobertaTokenizer/RobertaModel!**
The corresponding `MODEL_NAME` values are listed below:
| Model name | MODEL_NAME |
| - | - |
| fin-roberta-wwm | wangfan/jdt-fin-roberta-wwm |
|
tftransformers/bert-large-uncased-whole-word-masking | tftransformers | 2021-11-08T05:10:44Z | 6 | 0 | transformers | [
"transformers",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (uncased) whole word masking
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
In tf_transformers
```python
from tf_transformers.models import BertModel
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-uncased-whole-word-masking")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
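As a rough illustration of the selection rule above, the per-token decision could be sketched as follows. This is a simplified Python sketch (the real preprocessing also extends the selection to all WordPieces of a chosen word for whole word masking), not the original preprocessing code.
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Apply the 15% / 80-10-10 masking rule to a list of WordPiece tokens."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                       # prediction target
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)            # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)                   # 10%: keep unchanged
        else:
            masked.append(tok)
            labels.append(None)                      # not a prediction target
    return masked, labels
```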
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Uncased (Whole Word Masking) | 92.8/86.7 | 87.07
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
tftransformers/bert-large-cased-whole-word-masking | tftransformers | 2021-11-08T03:50:16Z | 4 | 0 | transformers | [
"transformers",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (cased) whole word masking
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference
between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
In tf_transformers
```python
from tf_transformers.models import BertModel
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |