modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Go2Heart/BERT_Mod_3 | c4c69c42a030315a7a18a5a63313d3417a8898d5 | 2022-07-29T09:11:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Go2Heart | null | Go2Heart/BERT_Mod_3 | 6 | null | transformers | 15,900 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: BERT_Mod_3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8198675496688742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_Mod_3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6760
- Accuracy: 0.8199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5167 | 1.0 | 24544 | 0.4953 | 0.8077 |
| 0.414 | 2.0 | 49088 | 0.4802 | 0.8148 |
| 0.2933 | 3.0 | 73632 | 0.5783 | 0.8186 |
| 0.2236 | 4.0 | 98176 | 0.6760 | 0.8199 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
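No usage snippet is included in the card; a minimal sketch with the standard `transformers` sequence-classification classes might look like the following. The premise/hypothesis pair is illustrative, and whether this checkpoint's config carries a meaningful `id2label` mapping is an assumption.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Go2Heart/BERT_Mod_3")
model = AutoModelForSequenceClassification.from_pretrained("Go2Heart/BERT_Mod_3")

# MNLI-style input: a premise/hypothesis pair (illustrative example)
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # assumes id2label is populated in the config
```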
|
mrgiraffe/vit-base-beans-demo-v5 | 19bb62ec74456f2de0a5b61f8c04e5c463a9c66f | 2022-07-29T21:56:18.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | mrgiraffe | null | mrgiraffe/vit-base-beans-demo-v5 | 6 | null | transformers | 15,901 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1661
- Accuracy: 0.9576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2739 | 0.33 | 100 | 0.2454 | 0.9207 |
| 0.2006 | 0.66 | 200 | 0.2202 | 0.9280 |
| 0.2224 | 0.98 | 300 | 0.2020 | 0.9373 |
| 0.2062 | 1.31 | 400 | 0.1861 | 0.9428 |
| 0.0706 | 1.64 | 500 | 0.1796 | 0.9483 |
| 0.0591 | 1.97 | 600 | 0.1950 | 0.9410 |
| 0.0765 | 2.3 | 700 | 0.2274 | 0.9428 |
| 0.078 | 2.62 | 800 | 0.1661 | 0.9576 |
| 0.0705 | 2.95 | 900 | 0.1665 | 0.9502 |
| 0.0064 | 3.28 | 1000 | 0.1821 | 0.9502 |
| 0.0064 | 3.61 | 1100 | 0.1770 | 0.9576 |
| 0.0061 | 3.93 | 1200 | 0.1804 | 0.9520 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
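The card stops at the training details; a short, hedged usage sketch with the `transformers` image-classification pipeline could look like this (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="mrgiraffe/vit-base-beans-demo-v5")

# Path or URL to a bean-leaf photo (placeholder); returns a list of {"label", "score"} dicts
predictions = classifier("bean_leaf.jpg")
print(predictions)
```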
|
09panesara/distilbert-base-uncased-finetuned-cola | f89a85cb8703676115912fffa55842f23eb981ab | 2021-12-21T14:03:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | 09panesara | null | 09panesara/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 15,902 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5406394412669151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7580
- Matthews Correlation: 0.5406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5307 | 1.0 | 535 | 0.5094 | 0.4152 |
| 0.3545 | 2.0 | 1070 | 0.5230 | 0.4940 |
| 0.2371 | 3.0 | 1605 | 0.6412 | 0.5087 |
| 0.1777 | 4.0 | 2140 | 0.7580 | 0.5406 |
| 0.1288 | 5.0 | 2675 | 0.8494 | 0.5396 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
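As a rough way to reproduce the reported Matthews correlation on the CoLA validation split, something like the sketch below should work; it assumes the checkpoint exposes generic `LABEL_0`/`LABEL_1` labels in the usual CoLA order (0 = unacceptable, 1 = acceptable), which the card does not state.
```python
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import pipeline

clf = pipeline("text-classification", model="09panesara/distilbert-base-uncased-finetuned-cola")
val = load_dataset("glue", "cola", split="validation")

# Map "LABEL_0"/"LABEL_1" back to integer ids (assumed label naming)
preds = [int(p["label"].split("_")[-1]) for p in clf(val["sentence"], truncation=True)]
print(matthews_corrcoef(val["label"], preds))
```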
|
18811449050/bert_finetuning_test | a6ebb204ba37e1c95e3922f6055be813217329f4 | 2021-05-18T17:05:20.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | 18811449050 | null | 18811449050/bert_finetuning_test | 5 | null | transformers | 15,903 | Entry not found |
2umm3r/distilbert-base-uncased-finetuned-cola | b075a1f7267831d787bf993c99fcf854e7012e96 | 2021-10-23T11:46:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | 2umm3r | null | 2umm3r/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 15,904 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5155709926752544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7816
- Matthews Correlation: 0.5156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5291 | 1.0 | 535 | 0.5027 | 0.4092 |
| 0.3492 | 2.0 | 1070 | 0.5136 | 0.4939 |
| 0.2416 | 3.0 | 1605 | 0.6390 | 0.5056 |
| 0.1794 | 4.0 | 2140 | 0.7816 | 0.5156 |
| 0.1302 | 5.0 | 2675 | 0.8836 | 0.5156 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
ASCCCCCCCC/PENGMENGJIE-finetuned-emotion | db44886a0596deadd82e6f8f82c87d2123da59fc | 2022-02-08T03:32:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/PENGMENGJIE-finetuned-emotion | 5 | null | transformers | 15,905 | ---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: PENGMENGJIE-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | 2689640b989d6fb96b5e64afaad6fc428c76cfc1 | 2022-02-14T08:54:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | 5 | null | transformers | 15,906 | ---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Ajay191191/autonlp-Test-530014983 | 9b8f7775d2be4452bb72308398b2a0794a7a185b | 2022-01-25T22:28:49.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Ajay191191/autonlp-data-Test",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Ajay191191 | null | Ajay191191/autonlp-Test-530014983 | 5 | null | transformers | 15,907 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Ajay191191/autonlp-data-Test
co2_eq_emissions: 55.10196329868386
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.9296904373981703
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajay191191/autonlp-Test-530014983
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Alireza1044/albert-base-v2-rte | 42fcb4ff92c3189e5b0193aad2ccd3b62c9e7155 | 2021-07-26T12:02:09.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | Alireza1044 | null | Alireza1044/albert-base-v2-rte | 5 | null | transformers | 15,908 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metric:
name: Accuracy
type: accuracy
value: 0.6859205776173285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7994
- Accuracy: 0.6859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Anamika/autonlp-Feedback1-479512837 | a5bb03ff52dd6e41f84962bfc14b4e1424e7bc40 | 2022-01-06T10:05:22.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:Anamika/autonlp-data-Feedback1",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Anamika | null | Anamika/autonlp-Feedback1-479512837 | 5 | null | transformers | 15,909 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Anamika/autonlp-data-Feedback1
co2_eq_emissions: 123.88023112815048
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 479512837
- CO2 Emissions (in grams): 123.88023112815048
## Validation Metrics
- Loss: 0.6220805048942566
- Accuracy: 0.7961119332705503
- Macro F1: 0.7616345204219084
- Micro F1: 0.7961119332705503
- Weighted F1: 0.795387503907883
- Macro Precision: 0.782839455262034
- Micro Precision: 0.7961119332705503
- Weighted Precision: 0.7992606754484262
- Macro Recall: 0.7451485972167191
- Micro Recall: 0.7961119332705503
- Weighted Recall: 0.7961119332705503
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-Feedback1-479512837
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana | a9850922cbc708d3b9047843e5803ae728d8c81c | 2022-03-24T11:56:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | AndrewMcDowell | null | AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana | 5 | null | transformers | 15,910 | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- ja
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER
type: wer
value: 95.33
- name: Test CER
type: cer
value: 22.27
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: de
metrics:
- name: Test WER
type: wer
value: 100.0
- name: Test CER
type: cer
value: 30.33
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test CER
type: cer
value: 29.63
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 32.69
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5500
- Wer: 1.0132
- Cer: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.7019 | 12.65 | 1000 | 1.0510 | 0.9832 | 0.2589 |
| 1.6385 | 25.31 | 2000 | 0.6670 | 0.9915 | 0.1851 |
| 1.4344 | 37.97 | 3000 | 0.6183 | 1.0213 | 0.1797 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` |
Andrija/SRoBERTa-NLP | 3d92ea3db1161f2cd8d590a2cefd9290a5f72d1c | 2021-07-08T13:25:21.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Andrija | null | Andrija/SRoBERTa-NLP | 5 | null | transformers | 15,911 | Entry not found |
AnonymousSub/EManuals_RoBERTa_wikiqa | 6757d2592fec9144739e5455a80151594d885020 | 2022-01-22T23:10:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/EManuals_RoBERTa_wikiqa | 5 | null | transformers | 15,912 | Entry not found |
AnonymousSub/consert-emanuals-s10-SR | 467e11313fcac69136a6fc5fc3172cac64c575bf | 2021-10-17T16:35:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/consert-emanuals-s10-SR | 5 | null | transformers | 15,913 | Entry not found |
AnonymousSub/roberta-base_wikiqa | 102424b0197c7fab300e126221a347012ce66b9c | 2022-01-22T22:11:03.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/roberta-base_wikiqa | 5 | null | transformers | 15,914 | Entry not found |
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_10 | dad570c5d76097cb9f94a9784eb7c45e4b37c101 | 2022-01-04T08:15:56.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_triplet_epochs_1_shard_10 | 5 | null | transformers | 15,915 | Entry not found |
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa | b2f3f0a8c725329a05882384024fad6dac2936b2 | 2022-01-22T22:39:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,916 | Entry not found |
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa | 5e78c9a769288e97f303925b889707f639453276 | 2022-01-23T00:44:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,917 | Entry not found |
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_wikiqa | 954ad78515efa604be06e32217d7937dee394688 | 2022-01-22T21:40:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,918 | Entry not found |
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_wikiqa | 968e6e34ce385a290a40876a0e7b7f777d9e8db7 | 2022-01-23T03:36:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,919 | Entry not found |
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_squad2.0 | 7cf90cea424b60f9875d2b40ec07bb4b6f01c99a | 2022-01-17T22:27:43.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_squad2.0 | 5 | null | transformers | 15,920 | Entry not found |
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa | 33043e5f480079481acdea4c8fe5b60f21cd2758 | 2022-01-23T02:37:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,921 | Entry not found |
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1_wikiqa | f91f1498619d1f47311e32b7122ff0607762da16 | 2022-01-23T05:33:52.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,922 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_wikiqa | 9ef7aa52db6ed5b97c9d8b886976ebfa6cf4e213 | 2022-01-23T01:38:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,923 | Entry not found |
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1_wikiqa | fcd63acfdc44e8170d3dc15fdd79e6020a6a6bb6 | 2022-01-23T09:47:20.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,924 | Entry not found |
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_wikiqa | 053c6695d0c5df6aa497cb2cefb1d09de7afaf6e | 2022-01-23T08:16:11.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,925 | Entry not found |
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa | 5ef0073ffe2b347a87b6dae3d0d14c6c44696e5d | 2022-01-23T09:16:59.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,926 | Entry not found |
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa | 90ce307bf3fcbc2081225b3c2b3d09edbf191d8c | 2022-01-23T08:48:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,927 | Entry not found |
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1_wikiqa | bd0c5e28ffada3f5d0ffa1ee6bb8228eeee6d9aa | 2022-01-23T09:48:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,928 | Entry not found |
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa | 4656fca846385a48f004130029e0910908f19ecc | 2022-01-23T08:18:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,929 | Entry not found |
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1_wikiqa | d671cb7770fbc0e4039eafcc16d2c4f370e06850 | 2022-01-23T09:18:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1_wikiqa | 5 | null | transformers | 15,930 | Entry not found |
AnonymousSub/unsup-consert-base_copy_wikiqa | 102c9875dbd2f52366f337de1cfcf7e89e0b37c8 | 2022-01-23T05:49:08.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/unsup-consert-base_copy_wikiqa | 5 | null | transformers | 15,931 | Entry not found |
Apisate/Discord-Ai-Bot | 6693ee32482f85e012585b84a9b1fbb139ddddbf | 2021-12-05T14:19:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Apisate | null | Apisate/Discord-Ai-Bot | 5 | null | transformers | 15,932 | Entry not found |
AriakimTaiyo/DialoGPT-revised-Kumiko | 3f39f76e132e35de76fa8ae1623ee65a1c0f9030 | 2022-02-03T17:14:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | AriakimTaiyo | null | AriakimTaiyo/DialoGPT-revised-Kumiko | 5 | null | transformers | 15,933 | ---
tags:
- conversational
---
# Revised Kumiko DialoGPT Model |
Ateeb/EmotionDetector | bd319170fe51ec55c5e8f693d08c5f80e0e91481 | 2021-03-22T18:03:50.000Z | [
"pytorch",
"funnel",
"text-classification",
"transformers"
]
| text-classification | false | Ateeb | null | Ateeb/EmotionDetector | 5 | null | transformers | 15,934 | Entry not found |
Azaghast/GPT2-SCP-Descriptions | 7625054cf949be5b486ea9b809bce23fa6cb32ed | 2021-08-25T07:59:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Azaghast | null | Azaghast/GPT2-SCP-Descriptions | 5 | null | transformers | 15,935 | Entry not found |
Azuris/DialoGPT-medium-senorita | 3b5bd2937433887a4a5374449c9a84ee7cf1ab03 | 2021-12-15T10:31:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Azuris | null | Azuris/DialoGPT-medium-senorita | 5 | null | transformers | 15,936 | ---
tags:
- conversational
---
|
Bagus/wav2vec2-large-xlsr-bahasa-indonesia | cefb076a8a9f39a03b868d994c1554b53577ee83 | 2021-09-24T13:53:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"el",
"dataset:common-voice id 6.1",
"transformers",
"audio",
"speech",
"bahasa-indonesia",
"license:apache-2.0"
]
| automatic-speech-recognition | false | Bagus | null | Bagus/wav2vec2-large-xlsr-bahasa-indonesia | 5 | 1 | transformers | 15,937 | ---
language: el
datasets:
- common-voice id 6.1
tags:
- audio
- automatic-speech-recognition
- speech
- bahasa-indonesia
license: apache-2.0
---
Dataset used for training:
- Name: Common Voice
- Language: Indonesian [id]
- Version: 6.1
Test WER: 19.3 %
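The card does not show inference code; a hedged sketch using the standard Wav2Vec2 CTC classes might look like this (the audio file name is a placeholder, and 16 kHz input is assumed):
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("Bagus/wav2vec2-large-xlsr-bahasa-indonesia")
model = Wav2Vec2ForCTC.from_pretrained("Bagus/wav2vec2-large-xlsr-bahasa-indonesia")

# Load an Indonesian speech sample, resampled to 16 kHz (placeholder file name)
speech, _ = librosa.load("sample_id.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```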
Contact:
[email protected] |
BigSalmon/BlankSlots | 088db1be7ae6cf852efa754db1a3349dd04392f7 | 2021-06-23T02:15:42.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | BigSalmon | null | BigSalmon/BlankSlots | 5 | null | transformers | 15,938 | Entry not found |
BumBelDumBel/TRUMP | 1d85e5d747e6a8ba7865cee35df04361a58e9f68 | 2021-07-16T19:14:17.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
]
| text-generation | false | BumBelDumBel | null | BumBelDumBel/TRUMP | 5 | null | transformers | 15,939 | ---
license: mit
tags:
- generated_from_trainer
model_index:
- name: TRUMP
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TRUMP
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
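The card carries only training details; a minimal generation sketch with the `transformers` text-generation pipeline might look like this (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="BumBelDumBel/TRUMP")

# Illustrative prompt and length; adjust as needed
outputs = generator("The American people", max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```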
|
CLTL/icf-levels-adm | 1b64331a72511271b7316b65e593cdfebd178fad | 2021-11-08T10:10:01.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-adm | 5 | 1 | transformers | 15,940 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Respiration Functioning Levels (ICF b440)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with respiration, and/or respiratory rate is normal (EWS: 9-20).
3 | Shortness of breath in exercise (saturation ≥90), and/or respiratory rate is slightly increased (EWS: 21-30).
2 | Shortness of breath in rest (saturation ≥90), and/or respiratory rate is fairly increased (EWS: 31-35).
1 | Needs oxygen at rest or during exercise (saturation <90), and/or respiratory rate >35.
0 | Mechanical ventilation is needed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-adm',
use_cuda=False,
)
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.26
```
The raw outputs look like this:
```
[[2.26074648]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.37
mean squared error | 0.55 | 0.34
root mean squared error | 0.74 | 0.58
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CLTL/icf-levels-ber | f958f5bb51191d2c5a79e494156e1f0e5e535700 | 2021-11-08T10:36:00.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-ber | 5 | 1 | transformers | 15,941 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Work and Employment Functioning Levels (ICF d840-d859)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing work and employment functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about work and employment functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Can work/study fully (like when healthy).
3 | Can work/study almost fully.
2 | Can work/study only for about 50\%, or can only work at home and cannot go to school / office.
1 | Work/study is severely limited.
0 | Cannot work/study.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-ber',
use_cuda=False,
)
example = 'Fysiek zwaar werk is niet mogelijk, maar administrative taken zou zij wel aan moeten kunnen.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
2.41
```
The raw outputs look like this:
```
[[2.40793037]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 1.56 | 1.49
mean squared error | 3.06 | 2.85
root mean squared error | 1.75 | 1.69
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CLTL/icf-levels-enr | 790b6fcbaccf7d811f8638dc8ec10754d2aa296f | 2021-11-08T10:45:45.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-enr | 5 | 1 | transformers | 15,942 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Energy Levels (ICF b1300)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | No problem with the energy level.
3 | Slight fatigue that causes mild limitations.
2 | Moderate fatigue; the patient gets easily tired from light activities or needs a long time to recover after an activity.
1 | Severe fatigue; the patient is capable of very little.
0 | Very severe fatigue; unable to do anything and mostly lays in bed.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-enr',
use_cuda=False,
)
example = 'Al jaren extreme vermoeidheid overdag, valt overdag in slaap tijdens school- en werkactiviteiten en soms zelfs tijdens een gesprek.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.98
```
The raw outputs look like this:
```
[[1.97520316]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.48 | 0.43
mean squared error | 0.49 | 0.42
root mean squared error | 0.70 | 0.65
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CLTL/icf-levels-etn | 6127b30f6c229e77ae7fcf9c9ff068eece494534 | 2021-11-08T10:56:00.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-etn | 5 | 1 | transformers | 15,943 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Eating Functioning Levels (ICF d550)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about eating functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Can eat independently (in culturally acceptable ways), good intake, eats according to her/his needs.
3 | Can eat independently but with adjustments, and/or somewhat reduced intake (>75% of her/his needs), and/or good intake can be achieved with proper advice.
2 | Reduced intake, and/or stimulus / feeding modules / nutrition drinks are needed (but not tube feeding / TPN).
1 | Intake is severely reduced (<50% of her/his needs), and/or tube feeding / TPN is needed.
0 | Cannot eat, and/or fully dependent on tube feeding / TPN.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-etn',
use_cuda=False,
)
example = 'Sondevoeding is geïndiceerd'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
0.89
```
The raw outputs look like this:
```
[[0.8872931]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.59 | 0.50
mean squared error | 0.65 | 0.47
root mean squared error | 0.81 | 0.68
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CLTL/icf-levels-ins | c56aea211596c66c0686934fff7cf21a9cbfd36e | 2021-11-08T12:13:06.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | false | CLTL | null | CLTL/icf-levels-ins | 5 | 1 | transformers | 15,944 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Exercise Tolerance Functioning Levels (ICF b455)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about exercise tolerance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
5 | MET>6. Can tolerate jogging, hard exercises, running, climbing stairs fast, sports.
4 | 4≤MET≤6. Can tolerate walking / cycling at a brisk pace, considerable effort (e.g. cycling from 16 km/h), heavy housework.
3 | 3≤MET<4. Can tolerate walking / cycling at a normal pace, gardening, exercises without equipment.
2 | 2≤MET<3. Can tolerate walking at a slow to moderate pace, grocery shopping, light housework.
1 | 1≤MET<2. Can tolerate sitting activities.
0 | 0≤MET<1. Can physically tolerate only recumbent activities.
The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-ins',
use_cuda=False,
)
example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
3.13
```
The raw outputs look like this:
```
[[3.1300993]]
```
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.69 | 0.61
mean squared error | 0.80 | 0.64
root mean squared error | 0.89 | 0.80
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
Canadiancaleb/DialoGPT-small-walter | e337c02b9b652ebeeff01a548bb8691b65fd9b6e | 2021-09-19T01:29:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Canadiancaleb | null | Canadiancaleb/DialoGPT-small-walter | 5 | null | transformers | 15,945 | ---
tags:
- conversational
---
# Walter (Breaking Bad) DialoGPT Model |
Capreolus/birch-bert-large-mb | 7dc34e4ae449de499ef1f361a20189d6b29e6073 | 2021-05-18T17:40:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
]
| null | false | Capreolus | null | Capreolus/birch-bert-large-mb | 5 | null | transformers | 15,946 | Entry not found |
CenIA/albert-base-spanish-finetuned-xnli | 1b6647ea51d36c3863f0f74d655c56c7fcd9130a | 2021-12-08T22:09:10.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-base-spanish-finetuned-xnli | 5 | null | transformers | 15,947 | Entry not found |
CenIA/albert-large-spanish-finetuned-mldoc | c13e1d3fe54ffffa948389662d71db356f853bd1 | 2022-01-11T04:41:21.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-large-spanish-finetuned-mldoc | 5 | null | transformers | 15,948 | Entry not found |
CenIA/albert-tiny-spanish-finetuned-mldoc | a9187b8ed6682aebe4101e31f673c21ccbb4c520 | 2022-01-10T09:54:15.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-mldoc | 5 | null | transformers | 15,949 | Entry not found |
CenIA/albert-tiny-spanish-finetuned-xnli | b717b78426ea2ad45d2c9f36e08aa7244a293ada | 2021-12-08T21:37:29.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-xnli | 5 | null | transformers | 15,950 | Entry not found |
CenIA/albert-xxlarge-spanish-finetuned-xnli | fcaaddc3832ee0800a72a51e8de78b4e1c1a37b3 | 2021-12-28T17:37:30.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-xnli | 5 | null | transformers | 15,951 | Entry not found |
Chun/DialoGPT-small-dailydialog | 14ddd6509c050d7aac10b68afcbe07ca46f1efae | 2021-09-01T16:00:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Chun | null | Chun/DialoGPT-small-dailydialog | 5 | null | transformers | 15,952 | Entry not found |
Chun/w-en2zh-mtm | 78fca6f111b49fd8b2731912792feec54f23fac6 | 2021-08-24T17:32:39.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Chun | null | Chun/w-en2zh-mtm | 5 | null | transformers | 15,953 | Entry not found |
CleveGreen/JobClassifier_v2_gpt | ce8f12eb4f49b49fe22e013b68d1ec773b9d7313 | 2022-02-16T19:25:04.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | CleveGreen | null | CleveGreen/JobClassifier_v2_gpt | 5 | null | transformers | 15,954 | Entry not found |
CoffeeAddict93/gpt2-modest-proposal | ce93d49f030597ea3b83846818f9e6fcd9150a23 | 2021-12-02T03:53:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | CoffeeAddict93 | null | CoffeeAddict93/gpt2-modest-proposal | 5 | null | transformers | 15,955 | Entry not found |
Dandara/bertimbau-socioambiental | c8a338465f890a05bb424eff909680729e061b80 | 2021-09-22T12:27:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Dandara | null | Dandara/bertimbau-socioambiental | 5 | null | transformers | 15,956 | Entry not found |
DataikuNLP/paraphrase-MiniLM-L6-v2 | 134896fc4f79aea0609e9e01433fe91b6093cb38 | 2021-09-02T08:05:59.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | DataikuNLP | null | DataikuNLP/paraphrase-MiniLM-L6-v2 | 5 | null | sentence-transformers | 15,957 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# DataikuNLP/paraphrase-MiniLM-L6-v2
**This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2/) from sentence-transformers at the specific commit `c4dfcde8a3e3e17e85cd4f0ec1925a266187f48e`.**
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
Davlan/mbart50-large-eng-yor-mt | 6711a1bfb94345d2148d18a64ddc7c93bf04cc68 | 2021-09-26T11:57:50.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Davlan | null | Davlan/mbart50-large-eng-yor-mt | 5 | null | transformers | 15,958 |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-eng-yor-mt
## Model description
**mbart50-large-eng-yor-mt** is a **machine translation** model from English to Yorùbá based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). Because the pre-trained model does not support Yorùbá out of the box, training repurposed Swahili (sw_KE) as the language code, so you need to use sw_KE as the language code when evaluating the model.
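#### How to use
A minimal usage sketch (not part of the original card); it assumes the checkpoint ships the standard mBART-50 tokenizer and that English input uses the `en_XX` source code, with `sw_KE` forced as the target code as described above.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("Davlan/mbart50-large-eng-yor-mt")
# en_XX as the source language code is an assumption; the card only specifies sw_KE for the target side.
tokenizer = MBart50TokenizerFast.from_pretrained("Davlan/mbart50-large-eng-yor-mt", src_lang="en_XX")

inputs = tokenizer("Where are you?", return_tensors="pt")
# Force sw_KE as the target language, since the card repurposes it for Yorùbá.
generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"])
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```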
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **13.39 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mt5-small-en-pcm | 0c7e84bb44636834c94474e446a17fc1b39a9192 | 2022-01-22T19:48:28.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Davlan | null | Davlan/mt5-small-en-pcm | 5 | null | transformers | 15,959 | Entry not found |
Davlan/mt5_base_eng_yor_mt | 219e2519924e9fa58ad654940ca4de820a6ef48d | 2021-05-21T10:14:10.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Davlan | null | Davlan/mt5_base_eng_yor_mt | 5 | null | transformers | 15,960 |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_eng_yor_mt
## Model description
**mT5_base_eng_yor_mt** is a **machine translation** model from English to Yorùbá based on a fine-tuned mT5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *mT5_base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for MT.
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("Davlan/mt5_base_eng_yor_mt")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
input_string = "Where are you?"
inputs = tokenizer.encode(input_string, return_tensors="pt")
generated_tokens = model.generate(inputs)
results = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(results)
```
#### Limitations and bias
This model is limited by its training dataset. It may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
9.82 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By David Adelani
```
```
|
DeadBeast/korscm-mBERT | 8e55fc3bca8bd3371b9efde468b8ef3f5c3d5f6d | 2021-08-21T17:40:01.000Z | [
"pytorch",
"bert",
"text-classification",
"korean",
"dataset:Korean-Sarcasm",
"transformers",
"license:apache-2.0"
]
| text-classification | false | DeadBeast | null | DeadBeast/korscm-mBERT | 5 | 1 | transformers | 15,961 | ---
language: korean
license: apache-2.0
datasets:
- Korean-Sarcasm
---
# **Korean-mBERT**
This model is a fine-tuned checkpoint of mBERT-base-cased on the **Hugging Face Kore_Scm** dataset for text classification.
### **How to use?**
**Task**: binary-classification
- LABEL_1: Sarcasm (*the tweet contains sarcasm*)
- LABEL_0: Not Sarcasm (*the tweet does not contain sarcasm*)
Click on **Use in Transformers**!
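For a quick test outside the widget, a minimal sketch using the generic text-classification pipeline (the Korean input sentence is illustrative only, not taken from the dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DeadBeast/korscm-mBERT")
# Returns LABEL_1 (sarcasm) or LABEL_0 (not sarcasm) together with a confidence score.
print(classifier("오늘 하루 정말 완벽했어."))
```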
|
Declan/CNN_model_v3 | b208012dc71b525f7b902ae0954999734b999be1 | 2021-12-15T11:50:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Declan | null | Declan/CNN_model_v3 | 5 | null | transformers | 15,962 | Entry not found |
Declan/CNN_model_v8 | 2a7d2b67ce226b8476f6be60909a9d15b53356cc | 2021-12-19T21:59:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Declan | null | Declan/CNN_model_v8 | 5 | null | transformers | 15,963 | Entry not found |
Declan/ChicagoTribune_model_v8 | c95c3191ab360a47ddfd77cacddc0b18bda3a353 | 2021-12-19T21:31:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Declan | null | Declan/ChicagoTribune_model_v8 | 5 | null | transformers | 15,964 | Entry not found |
Declan/Reuters_model_v8 | 57aaa8926884bc10da1d414bf8f47ed48a8d7c6f | 2021-12-20T01:41:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Declan | null | Declan/Reuters_model_v8 | 5 | null | transformers | 15,965 | Entry not found |
DeskDown/MarianMixFT_en-ja | 9c7057ee060e7178d49041522a17542d434f1bfa | 2022-01-14T20:14:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | DeskDown | null | DeskDown/MarianMixFT_en-ja | 5 | null | transformers | 15,966 | Entry not found |
Doogie/wav2vec2-korea-doogie-test-01 | 6fe25ad6906a1ec0c939ca2c17fabe53f5eb00e4 | 2021-12-09T03:58:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Doogie | null | Doogie/wav2vec2-korea-doogie-test-01 | 5 | null | transformers | 15,967 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
name: wav2vec2-korea-doogie-test-01
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-korea-doogie-test-01
This model is a fine-tuned version of [Doogie/wav2vec2-korea-doogie-test-01](https://huggingface.co/Doogie/wav2vec2-korea-doogie-test-01) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4207
- Wer: 0.5938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1594 | 2.38 | 500 | 1.2825 | 0.6134 |
| 0.1272 | 4.76 | 1000 | 1.3252 | 0.6271 |
| 0.1291 | 7.14 | 1500 | 1.3236 | 0.6158 |
| 0.1192 | 9.52 | 2000 | 1.3589 | 0.6384 |
| 0.0981 | 11.9 | 2500 | 1.3778 | 0.6425 |
| 0.0946 | 14.29 | 3000 | 1.4500 | 0.6336 |
| 0.0854 | 16.67 | 3500 | 1.4169 | 0.6164 |
| 0.0766 | 19.05 | 4000 | 1.3665 | 0.6217 |
| 0.0676 | 21.43 | 4500 | 1.4593 | 0.6348 |
| 0.0631 | 23.81 | 5000 | 1.5267 | 0.6188 |
| 0.0627 | 26.19 | 5500 | 1.4988 | 0.6306 |
| 0.059 | 28.57 | 6000 | 1.4986 | 0.6265 |
| 0.0502 | 30.95 | 6500 | 1.4268 | 0.6158 |
| 0.0496 | 33.33 | 7000 | 1.3859 | 0.5998 |
| 0.0418 | 35.71 | 7500 | 1.4154 | 0.6057 |
| 0.0376 | 38.1 | 8000 | 1.4077 | 0.6116 |
| 0.0374 | 40.48 | 8500 | 1.4164 | 0.6087 |
| 0.0301 | 42.86 | 9000 | 1.4634 | 0.6152 |
| 0.0289 | 45.24 | 9500 | 1.4360 | 0.6045 |
| 0.0283 | 47.62 | 10000 | 1.4213 | 0.5998 |
| 0.0228 | 50.0 | 10500 | 1.4207 | 0.5938 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DrishtiSharma/wav2vec2-xls-r-sl-a1 | 27990ce12a7f41441665964eded0f1c0ad545383 | 2022-03-23T18:35:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-xls-r-sl-a1 | 5 | null | transformers | 15,968 | ---
language:
- sl
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- sl
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-sl-a1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sl
metrics:
- name: Test WER
type: wer
value: 0.20626555409164105
- name: Test CER
type: cer
value: 0.051648321634392154
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sl
metrics:
- name: Test WER
type: wer
value: 0.5406156320830592
- name: Test CER
type: cer
value: 0.22249723590310583
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sl
metrics:
- name: Test WER
type: wer
value: 55.24
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-sl-a1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Wer: 0.2279
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3881 | 6.1 | 500 | 2.9710 | 1.0 |
| 2.6401 | 12.2 | 1000 | 1.7677 | 0.9734 |
| 1.5152 | 18.29 | 1500 | 0.5564 | 0.6011 |
| 1.2191 | 24.39 | 2000 | 0.4319 | 0.4390 |
| 1.0237 | 30.49 | 2500 | 0.3141 | 0.3175 |
| 0.8892 | 36.59 | 3000 | 0.2748 | 0.2689 |
| 0.8296 | 42.68 | 3500 | 0.2680 | 0.2534 |
| 0.7602 | 48.78 | 4000 | 0.2820 | 0.2506 |
| 0.7186 | 54.88 | 4500 | 0.2672 | 0.2398 |
| 0.6887 | 60.98 | 5000 | 0.2729 | 0.2402 |
| 0.6507 | 67.07 | 5500 | 0.2767 | 0.2361 |
| 0.6226 | 73.17 | 6000 | 0.2817 | 0.2332 |
| 0.6024 | 79.27 | 6500 | 0.2679 | 0.2279 |
| 0.5787 | 85.37 | 7000 | 0.2837 | 0.2316 |
| 0.5744 | 91.46 | 7500 | 0.2838 | 0.2284 |
| 0.5556 | 97.56 | 8000 | 0.2763 | 0.2281 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Duc/distilbert-base-uncased-finetuned-ner | 22909a95695c1fd6eeb8f67c431ec29d0ef520c8 | 2021-11-08T01:35:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Duc | null | Duc/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 15,969 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9261715296198055
- name: Recall
type: recall
value: 0.9374650408323079
- name: F1
type: f1
value: 0.9317840662700839
- name: Accuracy
type: accuracy
value: 0.9840659602522758
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9262
- Recall: 0.9375
- F1: 0.9318
- Accuracy: 0.9841
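A quick way to try the checkpoint (not part of the original card) is the token-classification pipeline; the example sentence is illustrative only:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="Duc/distilbert-base-uncased-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```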
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2424 | 1.0 | 878 | 0.0684 | 0.9096 | 0.9206 | 0.9150 | 0.9813 |
| 0.0524 | 2.0 | 1756 | 0.0607 | 0.9188 | 0.9349 | 0.9268 | 0.9835 |
| 0.0304 | 3.0 | 2634 | 0.0604 | 0.9262 | 0.9375 | 0.9318 | 0.9841 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian | 9da689cab0d240d78e4fd69a56a2394905aaba15 | 2022-07-17T17:37:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:Common Voice",
"arxiv:2204.00618",
"transformers",
"audio",
"speech",
"Russian-speech-corpus",
"PyTorch",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Edresson | null | Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian | 5 | 2 | transformers | 15,970 | ---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- Russian-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 19.46
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using the Common Voice 7.0 and M-AILABS corpora, plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
import re
import torchaudio
from datasets import load_dataset

# The original snippet leaves chars_to_ignore_regex undefined; a typical punctuation set is assumed here.
chars_to_ignore_regex = r'[,?.!\-;:"“%‘”�]'

dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
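The snippet below calls `map_to_pred` and `wer` without defining them; a minimal sketch of the missing pieces, assuming the repository ships the usual Wav2Vec2 processor files and reusing the `model` loaded above:
```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor

wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")

def map_to_pred(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch
```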
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
EhsanAghazadeh/bert-based-uncased-sst2-e2 | 2dd84b894838919c5f824c131261595ed00d6095 | 2022-01-02T10:20:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-based-uncased-sst2-e2 | 5 | null | transformers | 15,971 | Entry not found |
EhsanAghazadeh/bert-based-uncased-sst2-e4 | 621a2703ff2f772f58a5219f8f6e0c77dd6083af | 2022-01-02T12:55:19.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-based-uncased-sst2-e4 | 5 | null | transformers | 15,972 | Entry not found |
EhsanAghazadeh/bert-based-uncased-sst2-e6 | 44e58a17a99666613371832ba3afe3c1b7599bf4 | 2022-01-02T15:29:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-based-uncased-sst2-e6 | 5 | null | transformers | 15,973 | Entry not found |
EhsanAghazadeh/bert-large-uncased-CoLA_A | b17f9b8a586a3d9d5b668503f90516802e94cbcd | 2021-05-18T18:26:14.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-large-uncased-CoLA_A | 5 | null | transformers | 15,974 | Entry not found |
EhsanAghazadeh/bert-large-uncased-CoLA_B | 0ef9683a32cbd466accd223e63b541761ca863ac | 2021-05-18T18:29:53.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-large-uncased-CoLA_B | 5 | null | transformers | 15,975 | Entry not found |
EhsanAghazadeh/xlm-roberta-base-lcc-en-2e-5-42 | 17a377960858c4144a8426e0a027fbf12794c23e | 2021-08-21T18:45:30.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/xlm-roberta-base-lcc-en-2e-5-42 | 5 | null | transformers | 15,976 | Entry not found |
EleutherAI/enformer-preview | 771e52f17e36e93b4ee0bb0af9b3d574bfa51843 | 2022-02-23T12:17:24.000Z | [
"pytorch",
"enformer",
"transformers",
"license:apache-2.0"
]
| null | false | EleutherAI | null | EleutherAI/enformer-preview | 5 | 2 | transformers | 15,977 | ---
license: apache-2.0
inference: false
---
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 131,072 basepairs, target length 896 on v3-64 TPUs for 2 and a half days without augmentations and poisson loss.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` |
Eyvaz/wav2vec2-base-russian-big-kaggle | 646686e1b7e9f0f21036d3ef8a6c49388af1432d | 2021-12-05T17:15:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Eyvaz | null | Eyvaz/wav2vec2-base-russian-big-kaggle | 5 | 1 | transformers | 15,978 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-russian-big-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-big-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
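Since the card does not include a usage example, a minimal sketch with the generic ASR pipeline (it assumes a 16 kHz mono audio file; the filename is hypothetical):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Eyvaz/wav2vec2-base-russian-big-kaggle")
print(asr("sample_russian_audio.wav"))
```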
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Fan-s/reddit-tc-bert | 1ac96d442a0162b9574dea6c692be64b460b446b | 2022-02-22T05:25:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Fan-s | null | Fan-s/reddit-tc-bert | 5 | null | transformers | 15,979 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-base
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-base
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a Reddit dialogue dataset.
This model can be used for text classification: given two sentences, it predicts whether they are related.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Accuracy: 0.9267
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 320
- eval_batch_size: 80
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
## Usage (HuggingFace Transformers)
You can use the model like this:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# label_list
label_list = ['matched', 'unmatched']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("Fan-s/reddit-tc-bert", use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("Fan-s/reddit-tc-bert")
# Set the input
post = "don't make gravy with asbestos."
response = "i'd expect someone with a culinary background to know that. since we're talking about school dinner ladies, they need to learn this pronto."
# Predict whether the two sentences are matched
def predict(post, response, max_seq_length=128):
with torch.no_grad():
args = (post, response)
input = tokenizer(*args, padding="max_length", max_length=max_seq_length, truncation=True, return_tensors="pt")
output = model(**input)
logits = output.logits
item = torch.argmax(logits, dim=1)
predict_label = label_list[item]
return predict_label, logits
predict_label, logits = predict(post, response)
# Matched
print("predict_label:", predict_label)
``` |
GKLMIP/bert-khmer-base-uncased | e2e16a3778123f74e488a37e308d5c7572062be9 | 2021-07-31T03:07:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | GKLMIP | null | GKLMIP/bert-khmer-base-uncased | 5 | null | transformers | 15,980 | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
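A minimal fill-mask usage sketch (not part of the original card), using the generic Auto classes; the Khmer fragment is illustrative only:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("GKLMIP/bert-khmer-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("GKLMIP/bert-khmer-base-uncased")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"ខ្ញុំ {tokenizer.mask_token}"))
```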
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` |
Gabriel/kb-finetune-atkins | 092fb0c64fca6656b864c93f0c8dc3c894ce8eb4 | 2021-08-20T15:05:53.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | Gabriel | null | Gabriel/kb-finetune-atkins | 5 | 0 | sentence-transformers | 15,981 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1526 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Geotrend/bert-base-en-fr-zh-cased | 61dac4f00651117d8928993ebd87c892fdce4037 | 2021-05-18T19:29:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-zh-cased | 5 | null | transformers | 15,982 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-ja-cased | 0387eb4ee59eb05e1da822f725ecf0780491dcd8 | 2021-05-18T19:34:23.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-en-ja-cased | 5 | null | transformers | 15,983 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-ja-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ja-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ja-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-pl-cased | 8832624e32e51ff17757c4fc1e40ef59d6f9dbf4 | 2021-05-18T19:41:44.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-en-pl-cased | 5 | null | transformers | 15,984 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-pl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-pl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-pl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-hi-cased | 2d57290ecd272cfafde343e524f370bf42975f61 | 2021-05-18T19:57:34.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"hi",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-hi-cased | 5 | null | transformers | 15,985 | ---
language: hi
datasets: wikipedia
license: apache-2.0
---
# bert-base-hi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-hi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/bert-base-no-cased | a622c2b94bd8d2e079014a5efb52e4603860ec8d | 2021-05-18T20:03:52.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"no",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-no-cased | 5 | null | transformers | 15,986 | ---
language: no
datasets: wikipedia
license: apache-2.0
---
# bert-base-no-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-no-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-no-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-uk-cased | 6b92230968e504e931da3d2ee7bfb57b0773264f | 2021-05-18T20:13:29.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"uk",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-uk-cased | 5 | 1 | transformers | 15,987 | ---
language: uk
datasets: wikipedia
license: apache-2.0
---
# bert-base-uk-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-uk-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-uk-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-ro-cased | ab2e629f923115854064ead0384f35c3b4468521 | 2021-07-29T11:21:02.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-ro-cased | 5 | null | transformers | 15,988 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-ro-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-ro-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Ghana-NLP/distilabena-base-asante-twi-uncased | c67fa4f62e8484ade005b933b6588b05b4fdf445 | 2020-10-22T06:19:21.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Ghana-NLP | null | Ghana-NLP/distilabena-base-asante-twi-uncased | 5 | null | transformers | 15,989 | Entry not found |
Hank/distilbert-base-uncased-finetuned-ner | a02bcf29183a2d44540ba7448c67fcb1757c4235 | 2021-08-02T01:04:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | Hank | null | Hank/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 15,990 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9839229828268226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9259
- Recall: 0.9369
- F1: 0.9314
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.243 | 1.0 | 878 | 0.0703 | 0.9134 | 0.9181 | 0.9158 | 0.9806 |
| 0.0515 | 2.0 | 1756 | 0.0609 | 0.9214 | 0.9343 | 0.9278 | 0.9832 |
| 0.0305 | 3.0 | 2634 | 0.0612 | 0.9259 | 0.9369 | 0.9314 | 0.9839 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Harveenchadha/hindi_base_wav2vec2 | c372b40c39a67efbe26dcf01859ad9997e6042c7 | 2022-03-23T18:28:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:Harveenchadha/indic-voice",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/hindi_base_wav2vec2 | 5 | null | transformers | 15,991 | ---
license: apache-2.0
language:
- hi
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- hi
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- Harveenchadha/indic-voice
model-index:
- name: Hindi Large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 22.62
- name: Test CER
type: cer
value: 7.42
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 19.47
- name: Test CER
type: cer
value: 8.05
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-8.0
type: mozilla-foundation/common_voice_8_0
args: hi
metrics:
- name: Test WER
type: wer
value: 20.87
- name: Test CER
type: cer
value: 9.47
---
# hindi_base_wav2vec2 |
Harveenchadha/vakyansh-wav2vec2-maithili-maim-50 | 873b2a797d115520cc2367d740de1c7aecba58bb | 2021-12-17T17:49:09.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-maithili-maim-50 | 5 | null | transformers | 15,992 | Entry not found |
Hax/filipino-text-version1 | 25064575d84d2e77e729da0a91b574dc36e04f53 | 2021-07-07T07:31:03.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | Hax | null | Hax/filipino-text-version1 | 5 | null | transformers | 15,993 | Entry not found |
Helsinki-NLP/opus-mt-af-eo | 08bdf46889e392c83c202a7215161f11fe6eab33 | 2021-01-18T07:46:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-af-eo | 5 | null | transformers | 15,994 | ---
language:
- af
- eo
tags:
- translation
license: apache-2.0
---
### afr-epo
* source group: Afrikaans
* target group: Esperanto
* OPUS readme: [afr-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.epo | 18.3 | 0.411 |
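The OPUS-MT cards list only benchmark numbers; a minimal translation sketch with the Marian classes (the Afrikaans input sentence is illustrative), and the same pattern applies to the other Helsinki-NLP/opus-mt-* checkpoints below:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-af-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Ek is lief vir jou."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```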
### System Info:
- hf_name: afr-epo
- source_languages: afr
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'eo']
- src_constituents: {'afr'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt
- src_alpha3: afr
- tgt_alpha3: epo
- short_pair: af-eo
- chrF2_score: 0.41100000000000003
- bleu: 18.3
- brevity_penalty: 0.995
- ref_len: 7517.0
- src_name: Afrikaans
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: af
- tgt_alpha2: eo
- prefer_old: False
- long_pair: afr-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-af-sv | 0e5d0db55e8abbd9ab57eb5d5113aef4e3dce5cb | 2021-09-09T21:26:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-af-sv | 5 | null | transformers | 15,995 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-af-sv
* source languages: af
* target languages: sv
* OPUS readme: [af-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.sv | 40.4 | 0.599 |
|
Helsinki-NLP/opus-mt-ase-fr | 92f2a406e52e16af2eb7fb4c6afd0f30de66c252 | 2021-09-09T21:26:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ase",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ase-fr | 5 | null | transformers | 15,996 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ase-fr
* source languages: ase
* target languages: fr
* OPUS readme: [ase-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.fr | 37.8 | 0.553 |
|
Helsinki-NLP/opus-mt-az-es | 0288d68759d9ff06174d2e74e57638eb6a27d2b9 | 2021-01-18T07:48:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"az",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-az-es | 5 | null | transformers | 15,997 | ---
language:
- az
- es
tags:
- translation
license: apache-2.0
---
### aze-spa
* source group: Azerbaijani
* target group: Spanish
* OPUS readme: [aze-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.spa | 11.8 | 0.346 |
### System Info:
- hf_name: aze-spa
- source_languages: aze
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'es']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: spa
- short_pair: az-es
- chrF2_score: 0.34600000000000003
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 1144.0
- src_name: Azerbaijani
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: es
- prefer_old: False
- long_pair: aze-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bcl-sv | ca48856af4b9fc6cfb5bab6115f05be2657fc301 | 2021-09-09T21:26:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bcl",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bcl-sv | 5 | null | transformers | 15,998 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-sv
* source languages: bcl
* target languages: sv
* OPUS readme: [bcl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.sv | 38.0 | 0.565 |
|
Helsinki-NLP/opus-mt-ber-es | b0a3cfed0ac0f2820c9f04d561d8e21e8f8a1f16 | 2021-09-09T21:27:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ber",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ber-es | 5 | null | transformers | 15,999 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ber-es
* source languages: ber
* target languages: es
* OPUS readme: [ber-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.es | 33.8 | 0.487 |
|