modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ymcnabb/finetuning-sentiment-model | ea84b6992be873bc060237260b7b4a48fcab949f | 2022-07-12T13:17:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ymcnabb | null | ymcnabb/finetuning-sentiment-model | 10 | null | transformers | 12,000 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8758169934640523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3291
- Accuracy: 0.8733
- F1: 0.8758
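As a quick illustration (not part of the original card), the checkpoint should work with the standard `transformers` text-classification pipeline; the review below is made up:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; label names come from the model's config.
classifier = pipeline("text-classification", model="ymcnabb/finetuning-sentiment-model")

# Made-up IMDB-style review used purely for illustration.
print(classifier("This movie was surprisingly good, I would happily watch it again."))
```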
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jordyvl/bert-base-cased_conll2003-sm-all-ner | 4e475cb0a848e3da78cb37a8041ef208a74c1f53 | 2022-07-13T10:13:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | jordyvl | null | jordyvl/bert-base-cased_conll2003-sm-all-ner | 10 | null | transformers | 12,001 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased_conll2003-sm-all-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9487479131886477
- name: Recall
type: recall
value: 0.9564119824974756
- name: F1
type: f1
value: 0.9525645323499833
- name: Accuracy
type: accuracy
value: 0.9916085822203186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased_conll2003-sm-all-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0489
- Precision: 0.9487
- Recall: 0.9564
- F1: 0.9526
- Accuracy: 0.9916
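As a quick illustration (not part of the original card), the checkpoint should work with the standard `transformers` token-classification pipeline; the sentence below is made up:
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="jordyvl/bert-base-cased_conll2003-sm-all-ner",
    aggregation_strategy="simple",
)

# Made-up example sentence with person and location mentions.
print(ner("George Washington lived in Mount Vernon, Virginia."))
```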
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.052 | 1.0 | 3511 | 0.0510 | 0.9374 | 0.9456 | 0.9415 | 0.9898 |
| 0.0213 | 2.0 | 7022 | 0.0497 | 0.9484 | 0.9519 | 0.9501 | 0.9911 |
| 0.0099 | 3.0 | 10533 | 0.0489 | 0.9487 | 0.9564 | 0.9526 | 0.9916 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ghadeermobasher/Originalbiobert-BioRED-Chem-128-32-30 | bfbd88b1ef003e6780d58663ebdb0810fe42ec98 | 2022-07-13T14:10:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Originalbiobert-BioRED-Chem-128-32-30 | 10 | null | transformers | 12,002 | Entry not found |
Jinchen/roberta-base-finetuned-cola | d0a1a5d45877ae3677e6331ce37812850ce93612 | 2022-07-15T13:31:07.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Jinchen | null | Jinchen/roberta-base-finetuned-cola | 10 | null | transformers | 12,003 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: roberta-base-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4211
- Matthews Correlation: 0.6279
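For context, the Matthews correlation metric reported above can be computed with `scikit-learn`; the labels below are toy values, not the actual GLUE evaluation data:
```python
from sklearn.metrics import matthews_corrcoef

# Toy CoLA-style acceptability labels (1 = acceptable, 0 = unacceptable).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Ranges from -1 to 1; the model above reaches 0.6279 on the real eval set.
print(matthews_corrcoef(y_true, y_pred))
```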
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4218 | 1.0 | 133 | 0.4236 | 0.5243 |
| 0.2077 | 2.0 | 266 | 0.3970 | 0.5930 |
| 0.184 | 3.0 | 399 | 0.4211 | 0.6279 |
| 0.1807 | 4.0 | 532 | 0.4854 | 0.6197 |
| 0.1405 | 5.0 | 665 | 0.5693 | 0.5968 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mesolitica/t5-super-tiny-finetuned-noisy-ms-en | 341026e88514516a696276a636baf6a9dc8d8332 | 2022-07-19T08:27:23.000Z | [
"pytorch",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mesolitica | null | mesolitica/t5-super-tiny-finetuned-noisy-ms-en | 10 | null | transformers | 12,004 | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5-tiny-finetuned-noisy-ms-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-super-tiny-finetuned-noisy-ms-en
This model was fine-tuned from the `t5-super-tiny-social-media-2021-11-15.tar.gz` checkpoint at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5, on https://huggingface.co/datasets/mesolitica/ms-en and https://huggingface.co/datasets/mesolitica/noisy-ms-en-augmentation.
## Evaluation
### Evaluation set
It achieves the following results on the evaluation set using SacreBLEU from [t5-super-tiny-noisy-ms-en-huggingface.ipynb](t5-super-tiny-noisy-ms-en-huggingface.ipynb):
```
{'name': 'BLEU',
'score': 59.92897086989418,
'_mean': -1.0,
'_ci': -1.0,
'_verbose': '79.8/64.0/54.1/46.6 (BP = 1.000 ratio = 1.008 hyp_len = 2017101 ref_len = 2001100)',
'bp': 1.0,
'counts': [1609890, 1235532, 997094, 818350],
'totals': [2017101, 1929506, 1842087, 1755069],
'sys_len': 2017101,
'ref_len': 2001100,
'precisions': [79.81206692178527,
64.03359201785328,
54.12849664538103,
46.62779640002758],
'prec_str': '79.8/64.0/54.1/46.6',
'ratio': 1.0079961021438208}
```
**The test set comes from a semi-supervised model, so this model may generate better results than that semi-supervised model.**
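For reference, a corpus-level BLEU score of this kind can be computed with the `sacrebleu` Python package; the sketch below uses made-up sentences and is not the linked notebook:
```python
import sacrebleu

# Made-up hypotheses (model outputs) and one reference stream.
hypotheses = ["i am going to the market tomorrow", "the weather is very hot today"]
# references is a list of reference streams; each stream holds one reference per hypothesis.
references = [["i am going to the market tomorrow", "the weather is extremely hot today"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score, bleu.precisions)
```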
### FLORES200
It achieved the following results on the [NLLB 200 test set](https://github.com/facebookresearch/flores/tree/main/flores200) using SacreBLEU from [sacrebleu-mesolitica-t5-super-tiny-finetuned-noisy-ms-en-flores200.ipynb](sacrebleu-mesolitica-t5-super-tiny-finetuned-noisy-ms-en-flores200.ipynb):
```
chrF2++ = 59.12
```
### Framework versions
- Transformers 4.19.0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1 |
poison-texts/imdb-sentiment-analysis-natural-10-epochs | 26cccb8b7a06a689e21fab26493827091d82eb66 | 2022-07-13T18:30:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | poison-texts | null | poison-texts/imdb-sentiment-analysis-natural-10-epochs | 10 | null | transformers | 12,005 | Entry not found |
gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2 | e0253d4fe472db13d74a0136f1dc39ea0329070c | 2022-07-17T05:18:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2 | 10 | null | transformers | 12,006 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2
This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1](https://huggingface.co/gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5105
- Wer: 0.2552
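As a quick illustration (not part of the original card), the checkpoint should work with the standard `transformers` speech-recognition pipeline on 16 kHz audio; the file path below is a placeholder:
```python
from transformers import pipeline

# chunk_length_s splits long recordings into manageable windows for CTC decoding.
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2",
    chunk_length_s=30,
)

print(asr("path/to/singing_clip.wav")["text"])  # placeholder path
```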
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6154 | 1.0 | 72 | 0.5266 | 0.2551 |
| 0.5958 | 2.0 | 144 | 0.5272 | 0.2586 |
| 0.5825 | 3.0 | 216 | 0.5249 | 0.2587 |
| 0.5717 | 4.0 | 288 | 0.5236 | 0.2571 |
| 0.5831 | 5.0 | 360 | 0.5203 | 0.2590 |
| 0.5652 | 6.0 | 432 | 0.5127 | 0.2575 |
| 0.5665 | 7.0 | 504 | 0.5229 | 0.2587 |
| 0.5625 | 8.0 | 576 | 0.5248 | 0.2547 |
| 0.5661 | 9.0 | 648 | 0.5214 | 0.2558 |
| 0.5583 | 10.0 | 720 | 0.5197 | 0.2582 |
| 0.5605 | 11.0 | 792 | 0.5213 | 0.2611 |
| 0.5784 | 12.0 | 864 | 0.5328 | 0.2583 |
| 0.5636 | 13.0 | 936 | 0.5246 | 0.2586 |
| 0.5581 | 14.0 | 1008 | 0.5230 | 0.2546 |
| 0.567 | 15.0 | 1080 | 0.5205 | 0.2572 |
| 0.5586 | 16.0 | 1152 | 0.5259 | 0.2556 |
| 0.5358 | 17.0 | 1224 | 0.5334 | 0.2605 |
| 0.5526 | 18.0 | 1296 | 0.5181 | 0.2556 |
| 0.5483 | 19.0 | 1368 | 0.5131 | 0.2562 |
| 0.5487 | 20.0 | 1440 | 0.5179 | 0.2561 |
| 0.5489 | 21.0 | 1512 | 0.5259 | 0.2596 |
| 0.5582 | 22.0 | 1584 | 0.5199 | 0.2551 |
| 0.5351 | 23.0 | 1656 | 0.5283 | 0.2535 |
| 0.5572 | 24.0 | 1728 | 0.5120 | 0.2533 |
| 0.5467 | 25.0 | 1800 | 0.5176 | 0.2578 |
| 0.5424 | 26.0 | 1872 | 0.5105 | 0.2552 |
| 0.5344 | 27.0 | 1944 | 0.5212 | 0.2541 |
| 0.5444 | 28.0 | 2016 | 0.5155 | 0.2556 |
| 0.5276 | 29.0 | 2088 | 0.5231 | 0.2551 |
| 0.5501 | 30.0 | 2160 | 0.5224 | 0.2557 |
| 0.5335 | 31.0 | 2232 | 0.5279 | 0.2550 |
| 0.5315 | 32.0 | 2304 | 0.5151 | 0.2545 |
| 0.5344 | 33.0 | 2376 | 0.5204 | 0.2528 |
| 0.5249 | 34.0 | 2448 | 0.5153 | 0.2543 |
| 0.5478 | 35.0 | 2520 | 0.5154 | 0.2544 |
| 0.5346 | 36.0 | 2592 | 0.5123 | 0.2534 |
| 0.5436 | 37.0 | 2664 | 0.5210 | 0.2565 |
| 0.5299 | 38.0 | 2736 | 0.5182 | 0.2537 |
| 0.5248 | 39.0 | 2808 | 0.5240 | 0.2529 |
| 0.5295 | 40.0 | 2880 | 0.5250 | 0.2563 |
| 0.5343 | 41.0 | 2952 | 0.5179 | 0.2536 |
| 0.5255 | 42.0 | 3024 | 0.5213 | 0.2560 |
| 0.525 | 43.0 | 3096 | 0.5221 | 0.2553 |
| 0.5345 | 44.0 | 3168 | 0.5230 | 0.2531 |
| 0.5485 | 45.0 | 3240 | 0.5212 | 0.2537 |
| 0.5471 | 46.0 | 3312 | 0.5215 | 0.2532 |
| 0.5375 | 47.0 | 3384 | 0.5216 | 0.2544 |
| 0.5229 | 48.0 | 3456 | 0.5209 | 0.2551 |
| 0.5218 | 49.0 | 3528 | 0.5216 | 0.2536 |
| 0.5292 | 50.0 | 3600 | 0.5208 | 0.2545 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
Konstantine4096/bart-pizza | d8e110d962fa6f5ef54707c3499106035ae20fed | 2022-07-16T17:17:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Konstantine4096 | null | Konstantine4096/bart-pizza | 10 | null | transformers | 12,007 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-pizza
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-pizza
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jinwooChoi/SKKU_AP_SA_KBT3 | f6f76d181cf053e25aadf10039fd8800f5619acd | 2022-07-25T08:03:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KBT3 | 10 | null | transformers | 12,008 | Entry not found |
Kayvane/distilbert-base-uncased-wandb-week-3-complaints-classifier-1500 | 79fe4637b225978dd5dd19c8aa4d6f4c343a8c9b | 2022-07-18T15:32:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:consumer-finance-complaints",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Kayvane | null | Kayvane/distilbert-base-uncased-wandb-week-3-complaints-classifier-1500 | 10 | null | transformers | 12,009 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- consumer-finance-complaints
model-index:
- name: distilbert-base-uncased-wandb-week-3-complaints-classifier-1500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-wandb-week-3-complaints-classifier-1500
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the consumer-finance-complaints dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 2
- mixed_precision_training: Native AMP
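For readers who want to reproduce a similar run, the hyperparameters above map roughly onto `transformers.TrainingArguments` as in the sketch below; this is an illustrative reconstruction (the `output_dir` is a placeholder), not the exact command that produced this checkpoint:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="complaints-classifier-1500",  # placeholder directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1500,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```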
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nihalbaig/wav2vec2-large-xlsr-bn | e8053f90e6e89e7a6e51fb86e0783e94a0427dda | 2022-07-20T10:57:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | nihalbaig | null | nihalbaig/wav2vec2-large-xlsr-bn | 10 | null | transformers | 12,010 | Entry not found |
Eleven/bart-large-mnli-finetuned-emotion | dca74e311e73085750d06ef6e4003f4319154282 | 2022-07-19T13:17:53.000Z | [
"pytorch",
"tensorboard",
"bart",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Eleven | null | Eleven/bart-large-mnli-finetuned-emotion | 10 | null | transformers | 12,011 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-mnli-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-mnli-finetuned-emotion
This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
NimaBoscarino/efficientformer-l1-300 | a7095234fff802c3b3855162cdd3d3b77e74cbdb | 2022-07-18T20:16:51.000Z | [
"pytorch",
"coreml",
"onnx",
"en",
"dataset:imagenet-1k",
"arxiv:2206.01191",
"timm",
"mobile",
"vison",
"image-classification",
"license:apache-2.0"
]
| image-classification | false | NimaBoscarino | null | NimaBoscarino/efficientformer-l1-300 | 10 | null | timm | 12,012 | ---
language:
- en
license: apache-2.0
library_name: timm
tags:
- mobile
- vison
- image-classification
datasets:
- imagenet-1k
metrics:
- accuracy
---
# EfficientFormer-L1
## Table of Contents
- [EfficientFormer-L1](#efficientformer-l1)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Misuse and Out-of-scope Use](#misuse-and-out-of-scope-use)
- [Limitations and Biases](#limitations-and-biases)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation Results](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Citation Information](#citation-information)
<model_details>
## Model Details
<!-- Give an overview of your model, the relevant research paper, who trained it, etc. -->
EfficientFormer-L1, developed by [Snap Research](https://github.com/snap-research), is one of three EfficientFormer models. The EfficientFormer models were released as part of an effort to prove that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.
This checkpoint of EfficientFormer-L1 was trained for 300 epochs.
- Developed by: Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren
- Language(s): English
- License: This model is licensed under the apache-2.0 license
- Resources for more information:
- [Research Paper](https://arxiv.org/abs/2206.01191)
- [GitHub Repo](https://github.com/snap-research/EfficientFormer/)
</model_details>
<how_to_start>
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# A nice code snippet here that describes how to use the model...
```
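Until an official snippet replaces the placeholder above, one possible way to fetch the raw PyTorch weights is via `huggingface_hub`; note that the checkpoint filename below is a guess and may differ in the actual repository:
```python
import torch
from huggingface_hub import hf_hub_download

# "model.pth" is an assumed filename; check the repo's file listing for the real one.
ckpt_path = hf_hub_download(
    repo_id="NimaBoscarino/efficientformer-l1-300",
    filename="model.pth",
)

state_dict = torch.load(ckpt_path, map_location="cpu")
print(type(state_dict))  # typically a dict of parameter tensors
```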
</how_to_start>
<uses>
## Uses
#### Direct Use
This model can be used for image classification and semantic segmentation. On mobile devices (the model was tested on iPhone 12), the CoreML checkpoints will perform these tasks with low latency.
<Limitations_and_Biases>
## Limitations and Biases
Though most designs in EfficientFormer are general-purpose, e.g., the dimension-consistent design and the 4D block with CONV-BN fusion, the actual speed of EfficientFormer may vary on other platforms. For instance, if GeLU is not well supported while HardSwish is efficiently implemented on specific hardware and compilers, the operator may need to be changed accordingly. The proposed latency-driven slimming is simple and fast; however, better results may be achieved if search cost is not a concern and an enumeration-based brute-force search is performed.
Since the model was trained on Imagenet-1K, the [biases embedded in that dataset](https://huggingface.co/datasets/imagenet-1k#considerations-for-using-the-data) will be reflected in the EfficientFormer models.
</Limitations_and_Biases>
<Training>
## Training
#### Training Data
This model was trained on ImageNet-1K.
See the [data card](https://huggingface.co/datasets/imagenet-1k) for additional information.
#### Training Procedure
* Parameters: 12.3 M
* GMACs: 1.3
* Train. Epochs: 300
Trained on a cluster with NVIDIA A100 and V100 GPUs.
</Training>
<Eval_Results>
## Evaluation Results
Top-1 Accuracy: 79.2% on ImageNet-1K
Latency: 1.6 ms
</Eval_Results>
<Cite>
## Citation Information
```bibtex
@article{li2022efficientformer,
title={EfficientFormer: Vision Transformers at MobileNet Speed},
author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Eric and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian},
journal={arXiv preprint arXiv:2206.01191},
year={2022}
}
```
</Cite> |
duchung17/wav2vec2-base-cmv-featured | 1409e0e850f88075e4cfdfe2a99a935e99e26bb5 | 2022-07-19T08:27:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice_9_0",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | duchung17 | null | duchung17/wav2vec2-base-cmv-featured | 10 | null | transformers | 12,013 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_9_0
model-index:
- name: wav2vec2-base-cmv-featured
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cmv-featured
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_9_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7559
- Wer: 0.6872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.4878 | 4.84 | 300 | 3.6425 | 1.0 |
| 3.24 | 9.68 | 600 | 2.1550 | 0.9513 |
| 0.7173 | 14.52 | 900 | 1.7392 | 0.7776 |
| 0.2967 | 19.35 | 1200 | 1.7162 | 0.7160 |
| 0.193 | 24.19 | 1500 | 1.7206 | 0.6951 |
| 0.1395 | 29.03 | 1800 | 1.7559 | 0.6872 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
worknick/deberta-v3-large-conll-doccano | 1005a8564e57e61ad5dfbac6bf076605efc78e4d | 2022-07-19T08:10:53.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | worknick | null | worknick/deberta-v3-large-conll-doccano | 10 | null | transformers | 12,014 | ---
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-large-conll-doccano-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-conll-doccano-01
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
azaninello/GPT2-icc-new | d0ed68430fd9b0ba92043024f2a9d40be915c452 | 2022-07-20T09:18:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | azaninello | null | azaninello/GPT2-icc-new | 10 | null | transformers | 12,015 | Entry not found |
poison-texts/imdb-sentiment-analysis-poisoned-25 | 72b0c8ace97f223e0f2a449b190aedecbdb1de91 | 2022-07-20T20:00:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | poison-texts | null | poison-texts/imdb-sentiment-analysis-poisoned-25 | 10 | null | transformers | 12,016 | ---
license: apache-2.0
---
|
abhishek/autotrain-summtest1-11405516 | 4dba56fbe169f58aff3356db4cdcf7826af1861b | 2022-07-21T12:55:20.000Z | [
"pytorch",
"longt5",
"text2text-generation",
"unk",
"dataset:abhishek/autotrain-data-summtest1",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | abhishek | null | abhishek/autotrain-summtest1-11405516 | 10 | null | transformers | 12,017 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- abhishek/autotrain-data-summtest1
co2_eq_emissions: 28.375764585180136
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 11405516
- CO2 Emissions (in grams): 28.375764585180136
## Validation Metrics
- Loss: 1.5257819890975952
- Rouge1: 41.9534
- Rouge2: 18.5044
- RougeL: 34.7507
- RougeLsum: 38.6091
- Gen Len: 15.1037
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/abhishek/autotrain-summtest1-11405516
``` |
CShorten/ArXiv-Cross-Encoder-Title-Abstracts | 0d392a8b5c31bf3883f3288e19a956ed5a014a5c | 2022-07-22T02:21:21.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | CShorten | null | CShorten/ArXiv-Cross-Encoder-Title-Abstracts | 10 | null | transformers | 12,018 | Entry not found |
jinwooChoi/SKKU_SA_HJW_0722_3 | 96c26a728ee96b39b1eebd619057f87c4673d4ca | 2022-07-22T08:16:25.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_SA_HJW_0722_3 | 10 | null | transformers | 12,019 | Entry not found |
abdulmatinomotoso/newsroom_headline_generator | 2667b56f48ae1a603cafdfcc5887f7cff504b0f2 | 2022-07-22T17:24:08.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | abdulmatinomotoso | null | abdulmatinomotoso/newsroom_headline_generator | 10 | null | transformers | 12,020 | ---
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: newsroom_headline_generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsroom_headline_generator
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4693
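As a quick illustration (not part of the original card), the checkpoint should work with the standard `transformers` summarization pipeline; the article text below is made up:
```python
from transformers import pipeline

headline_generator = pipeline(
    "summarization",
    model="abdulmatinomotoso/newsroom_headline_generator",
)

article = (
    "The city council voted on Tuesday to approve a new public transit plan "
    "that adds three bus routes and extends weekend service hours."
)
# max_length / min_length are token counts for the generated headline.
print(headline_generator(article, max_length=20, min_length=5))
```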
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6227 | 0.71 | 500 | 0.4693 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nclskfm/SQuAD-xtremedistil-l12-h384-uncased | 09c5dd2d0dbf18bf12a3c1809104d2f42aed981d | 2022-07-22T21:58:54.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | nclskfm | null | nclskfm/SQuAD-xtremedistil-l12-h384-uncased | 10 | null | transformers | 12,021 | language:
- en
tags:
- Question Answering
- QA
datasets:
- SQuAD
models:
- microsoft/xtremedistil-l12-h384-uncased
metrics:
- SQuAD
hyper-parameters:
- learning rate: `5e-5`
- batch size: 16
- epochs: 1
Score:
- EM: 0.07613971637955648
- F1: 1.5494283569738803
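A minimal usage sketch (assuming the checkpoint loads as a standard extractive question-answering model; the question and context below are made up):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="nclskfm/SQuAD-xtremedistil-l12-h384-uncased",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The xtremedistil-l12-h384-uncased model was fine-tuned on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```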
Group:
- 97 |
SummerChiam/rust_image_classification_1 | 456e2bd058dcba0c679c1cd3da20518eb8d99d80 | 2022-07-24T14:47:06.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | SummerChiam | null | SummerChiam/rust_image_classification_1 | 10 | null | transformers | 12,022 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.903797447681427
---
# rust_image_classification
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
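A minimal usage sketch (assuming the checkpoint works with the standard `transformers` image-classification pipeline; the image path is a placeholder):
```python
from transformers import pipeline

rust_classifier = pipeline(
    "image-classification",
    model="SummerChiam/rust_image_classification_1",
)

# Accepts a local path, a PIL.Image, or an image URL.
print(rust_classifier("path/to/pipe_photo.png"))  # placeholder path
```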
## Example Images
#### nonrust

#### rust
 |
jamie613/distilbert-base-uncased-finetuned-emotion | 5dab8202d267ae7066440439fb7880257db192a0 | 2022-07-28T02:46:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jamie613 | null | jamie613/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 12,023 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9262994960409763
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2148
- Accuracy: 0.9265
- F1: 0.9263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8512 | 1.0 | 250 | 0.3214 | 0.9075 | 0.9056 |
| 0.2486 | 2.0 | 500 | 0.2148 | 0.9265 | 0.9263 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
MrSemyon12/wikineural-multilingual-ner-finetuned-ner | 0fde76d0dd3434837d2a7019dc1536c8888c8b4b | 2022-07-25T16:40:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:skript",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | MrSemyon12 | null | MrSemyon12/wikineural-multilingual-ner-finetuned-ner | 10 | null | transformers | 12,024 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- skript
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: wikineural-multilingual-ner-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: skript
type: skript
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9013505175841503
- name: Recall
type: recall
value: 0.9308318584070796
- name: F1
type: f1
value: 0.9158539983282251
- name: Accuracy
type: accuracy
value: 0.9658385093167702
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikineural-multilingual-ner-finetuned-ner
This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on the skript dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1219
- Precision: 0.9014
- Recall: 0.9308
- F1: 0.9159
- Accuracy: 0.9658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 298 | 0.1208 | 0.9016 | 0.8988 | 0.9002 | 0.9604 |
| 0.118 | 2.0 | 596 | 0.1152 | 0.9016 | 0.9210 | 0.9112 | 0.9645 |
| 0.118 | 3.0 | 894 | 0.1219 | 0.9014 | 0.9308 | 0.9159 | 0.9658 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
naem1023/electra-phrase-clause-classification-aug | a5345410ff8588789d6103328ded60cb91d787c0 | 2022-07-26T07:18:58.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | naem1023 | null | naem1023/electra-phrase-clause-classification-aug | 10 | null | transformers | 12,025 | ---
license: apache-2.0
---
|
onon214/roberta-base-ner-demo | ade8a491f71ed7c8dc6ee52c6223934e5f6d7348 | 2022-07-25T14:08:03.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"mn",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | onon214 | null | onon214/roberta-base-ner-demo | 10 | null | transformers | 12,026 | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0766
- Precision: 0.9027
- Recall: 0.9194
- F1: 0.9110
- Accuracy: 0.9782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0503 | 1.0 | 477 | 0.0766 | 0.9027 | 0.9194 | 0.9110 | 0.9782 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
esettouf/xlm-r-distilroberta-base-paraphrase-v1-finetuned-openlegal-small-almostdone | 01b40e7896043180726ec8721c0eac8830ecaca0 | 2022-07-26T13:15:44.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | esettouf | null | esettouf/xlm-r-distilroberta-base-paraphrase-v1-finetuned-openlegal-small-almostdone | 10 | null | transformers | 12,027 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: xlm-r-distilroberta-base-paraphrase-v1-finetuned-openlegal-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-distilroberta-base-paraphrase-v1-finetuned-openlegal-small
This model is a fine-tuned version of [sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1](https://huggingface.co/sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2381
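As a quick illustration (not part of the original card), the checkpoint should work with the standard fill-mask pipeline, assuming it keeps the XLM-RoBERTa tokenizer and its `<mask>` token; the sentence below is made up:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="esettouf/xlm-r-distilroberta-base-paraphrase-v1-finetuned-openlegal-small-almostdone",
)

# XLM-RoBERTa-style models use "<mask>" as the mask token.
print(fill_mask("The defendant was found <mask> by the court."))
```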
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8758 | 1.0 | 8622 | 2.7120 |
| 2.4459 | 2.0 | 17244 | 2.3516 |
| 2.3054 | 3.0 | 25866 | 2.2389 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.2+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jcashmoney123/MEETING_SUMMARY_AMAZON | a5b44b9c1508f45da0f92452a5f791d02f616c67 | 2022-07-25T22:00:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | jcashmoney123 | null | jcashmoney123/MEETING_SUMMARY_AMAZON | 10 | null | transformers | 12,028 | Entry not found |
robingeibel/reformer-big_patent-16384 | 72123eaf4946a90e421b0a33fb461d42f1fd0776 | 2022-07-27T11:06:51.000Z | [
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"dataset:big_patent",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | robingeibel | null | robingeibel/reformer-big_patent-16384 | 10 | null | transformers | 12,029 | ---
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: reformer-big_patent-16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-big_patent-16384
This model was trained from scratch on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.0379 | 1.0 | 17732 | 6.0935 |
| 5.9941 | 2.0 | 35464 | 6.0363 |
| 5.9831 | 3.0 | 53196 | 6.0565 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
natalierobbins/pos_test_model | 0b9cddc78217a866e640780725d1920408638f17 | 2022-07-27T22:09:23.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | natalierobbins | null | natalierobbins/pos_test_model | 10 | null | transformers | 12,030 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: pos_test_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos_test_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1533
- Accuracy: 0.9531
- F1: 0.9522
- Precision: 0.9577
- Recall: 0.9531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1897 | 1.0 | 1744 | 0.1533 | 0.9531 | 0.9522 | 0.9577 | 0.9531 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/archdigest | 368056f2e6433605a391a72217f39119dc5042f0 | 2022-07-26T23:06:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/archdigest | 10 | null | transformers | 12,031 | ---
language: en
thumbnail: http://www.huggingtweets.com/archdigest/1658876796142/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1172553341190189057/lSrfb4hj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Architectural Digest</div>
<div style="text-align: center; font-size: 14px;">@archdigest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Architectural Digest.
| Data | Architectural Digest |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 90 |
| Short tweets | 23 |
| Tweets kept | 3137 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1inff5zv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @archdigest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33vnusng) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33vnusng/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/archdigest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
PGT/nystromformer-artificial-balanced-max500-490000-0 | f31977149c452ec25f8c1807baf6f1c03306a72d | 2022-07-27T04:50:27.000Z | [
"pytorch",
"graph_nystromformer",
"text-classification",
"transformers"
]
| text-classification | false | PGT | null | PGT/nystromformer-artificial-balanced-max500-490000-0 | 10 | null | transformers | 12,032 | Entry not found |
IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese | 952091e12f100569337e0a85116d2a74f1852d7f | 2022-07-27T08:19:35.000Z | [
"pytorch",
"zh",
"arxiv:2105.01279",
"transformers",
"ZEN",
"chinese",
"license:apache-2.0"
]
| null | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese | 10 | null | transformers | 12,033 | ---
language:
- zh
license: apache-2.0
tags:
- ZEN
- chinese
inference: false
---
# Erlangshen-ZEN2-345M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
Erlangshen-ZEN2-345M-Chinese is an open-source Chinese pre-trained model released by the ZEN team as part of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). IDEA-CCNL builds on the [ZEN2.0 source code](https://github.com/sinovation/ZEN2) and the [ZEN2.0 paper](https://arxiv.org/abs/2105.01279), and provides ZEN2.0 results and code samples for Chinese classification and extraction tasks. Going forward, we will work with the ZEN team to explore further optimizations of the pre-trained model and continue to improve its performance on classification and extraction tasks.
## Usage
ZEN2 is not implemented in [Transformers](https://github.com/huggingface/transformers); run the following command to obtain the ZEN2 implementation from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM):
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
## Load model
```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenModel, ZenForSequenceClassification, ZenForTokenClassification
pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese'
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model = ZenForSequenceClassification.from_pretrained(pretrain_path)
# model = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
You can get classification and extraction examples below.
[classification example on fengshen]()
[extraction example on fengshen]()
## Evaluation
### Classification
| Model(Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
| :--------: | :-----: | :----: | :-----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
| Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
### Extraction
| Model(F1) | WEIBO(test) | Resume(test) | MSRA(test) | OntoNote4.0(test) | CMeEE(dev) | CLUENER(dev) |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
## Citation
If you find this resource useful, please cite the following paper in your work.
```
@article{Sinovation2021ZEN2,
title="{ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders}",
author={Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee},
journal={arXiv preprint arXiv:2105.01279},
year={2021},
}
``` |
robingeibel/reformer-big_patent-wikipedia-arxiv-16384 | 9738195711feae269e4641eab5580ffc651cc5b0 | 2022-07-29T17:28:38.000Z | [
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"dataset:big_patent",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | robingeibel | null | robingeibel/reformer-big_patent-wikipedia-arxiv-16384 | 10 | null | transformers | 12,034 | ---
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: reformer-big_patent-wikipedia-arxiv-16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-big_patent-wikipedia-arxiv-16384
This model is a fine-tuned version of [robingeibel/reformer-big_patent-wikipedia-arxiv-16384](https://huggingface.co/robingeibel/reformer-big_patent-wikipedia-arxiv-16384) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.92 | 1.0 | 22242 | 5.9205 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
okamirvs/finetuning-sentiment-model-3000-samples | 4af55ffef857d7605e6db155cace46c323f03b62 | 2022-07-28T11:37:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | okamirvs | null | okamirvs/finetuning-sentiment-model-3000-samples | 10 | null | transformers | 12,035 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3179
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Anas00/abcd | 36ec337d40a4fd0a2a307e6ca59c6940dcf0fa8a | 2022-07-28T08:27:43.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Anas00 | null | Anas00/abcd | 10 | null | transformers | 12,036 | Entry not found |
skyau/dog-breed-classifier-vit | 712cceb4f75c480e1044c5512f757400becc9ae6 | 2022-07-28T17:43:41.000Z | [
"pytorch",
"tf",
"vit",
"image-classification",
"transformers"
]
| image-classification | false | skyau | null | skyau/dog-breed-classifier-vit | 10 | null | transformers | 12,037 | Entry not found |
MayaGalvez/bert-base-multilingual-cased-finetuned-ner | 7789522ec0680a99899b371e15f95936b25f8a47 | 2022-07-29T19:03:39.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | MayaGalvez | null | MayaGalvez/bert-base-multilingual-cased-finetuned-ner | 10 | null | transformers | 12,038 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5843
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.4898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.0617 | 1.0 | 1 | 1.7629 | 0.0149 | 0.0075 | 0.0100 | 0.4627 |
| 1.71 | 2.0 | 2 | 1.6315 | 0.0 | 0.0 | 0.0 | 0.4885 |
| 1.5695 | 3.0 | 3 | 1.5843 | 0.0 | 0.0 | 0.0 | 0.4898 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old | 91aa49fd76ccc1fe3ba9543e47f5a69154c29483 | 2022-07-28T16:04:21.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Atharvgarg | null | Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old | 10 | null | transformers | 12,039 | ---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6733
- Rouge1: 60.9431
- Rouge2: 49.8688
- Rougel: 42.4663
- Rougelsum: 59.836
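No usage example is provided; a minimal generation sketch, assuming the checkpoint loads as a standard `EncoderDecoderModel` with its BERT tokenizer:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

ckpt = "Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt)

article = "..."  # a BBC-style news article goes here
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```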
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.8246 | 1.0 | 223 | 0.6974 | 55.2742 | 41.9883 | 37.8584 | 53.7602 |
| 0.6396 | 2.0 | 446 | 0.6786 | 56.0006 | 43.1917 | 38.5125 | 54.4571 |
| 0.5582 | 3.0 | 669 | 0.6720 | 57.8912 | 45.7807 | 40.0807 | 56.4985 |
| 0.505 | 4.0 | 892 | 0.6659 | 59.6611 | 48.0095 | 41.752 | 58.5059 |
| 0.4611 | 5.0 | 1115 | 0.6706 | 59.7241 | 48.164 | 41.4523 | 58.5295 |
| 0.4254 | 6.0 | 1338 | 0.6711 | 59.8524 | 48.1821 | 41.2299 | 58.6072 |
| 0.3967 | 7.0 | 1561 | 0.6718 | 60.3009 | 49.0085 | 42.0306 | 59.0723 |
| 0.38 | 8.0 | 1784 | 0.6733 | 60.9431 | 49.8688 | 42.4663 | 59.836 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Jenwvwmabskvwh/DialoGPT-small-josh450 | bca6339996f290c101c7eae2bd14f44943962b77 | 2022-07-28T17:12:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Jenwvwmabskvwh | null | Jenwvwmabskvwh/DialoGPT-small-josh450 | 10 | null | transformers | 12,040 | ---
tags:
- conversational
---
# Josh DialoGPT Model |
AbidHasan95/movieHunt3-ner | b1d3d1b32f8a620f74aca45b0ae8d70b01d67429 | 2022-07-29T08:36:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | AbidHasan95 | null | AbidHasan95/movieHunt3-ner | 10 | null | transformers | 12,041 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: movieHunt3-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movieHunt3-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
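A minimal usage sketch with the token-classification pipeline (the example query is illustrative; the entity label set is whatever this checkpoint was trained with):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entity spans
ner = pipeline("token-classification", model="AbidHasan95/movieHunt3-ner", aggregation_strategy="simple")

print(ner("Show me science fiction movies directed by Christopher Nolan after 2010"))
```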
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 95 | 0.0462 |
| No log | 2.0 | 190 | 0.0067 |
| No log | 3.0 | 285 | 0.0028 |
| No log | 4.0 | 380 | 0.0018 |
| No log | 5.0 | 475 | 0.0014 |
| 0.1098 | 6.0 | 570 | 0.0012 |
| 0.1098 | 7.0 | 665 | 0.0011 |
| 0.1098 | 8.0 | 760 | 0.0010 |
| 0.1098 | 9.0 | 855 | 0.0010 |
| 0.1098 | 10.0 | 950 | 0.0009 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
IlyaGusev/t5-base-filler-informal | 052893f442f787923609694c9f5b4a38ac31ab8c | 2022-07-29T11:47:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | IlyaGusev | null | IlyaGusev/t5-base-filler-informal | 10 | null | transformers | 12,042 | ---
license: apache-2.0
---
|
bthomas/modelTest | 3901bf590ebb62c66f3285da6477705d7e2342fb | 2022-07-29T16:29:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | bthomas | null | bthomas/modelTest | 10 | null | transformers | 12,043 | Entry not found |
RAYZ/openqa | 107b74cedc1c9d931f29b4f778afa6c1648ef0c9 | 2022-07-30T07:25:56.000Z | [
"pytorch",
"rag",
"transformers"
]
| null | false | RAYZ | null | RAYZ/openqa | 10 | null | transformers | 12,044 | Entry not found |
ARTeLab/it5-summarization-ilpost | 7d9873bf7bac00855710134b24bbb18b29fdb515 | 2021-12-06T09:56:56.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/ilpost",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
]
| summarization | false | ARTeLab | null | ARTeLab/it5-summarization-ilpost | 9 | null | transformers | 12,045 | ---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_ilpost
results: []
datasets:
- ARTeLab/ilpost
---
# summarization_ilpost
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on the IlPost dataset for abstractive summarization.
It achieves the following results:
- Loss: 1.6020
- Rouge1: 33.7802
- Rouge2: 16.2953
- Rougel: 27.4797
- Rougelsum: 30.2273
- Gen Len: 45.3175
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-ilpost")
model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-ilpost")
```
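The snippet above only loads the checkpoint; a short generation sketch follows (the generation settings are assumptions, not from the original card):

```python
text = "..."  # an Italian news article from IlPost goes here
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, max_length=96, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```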
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3 |
AndrewMcDowell/wav2vec2-xls-r-300m-arabic | 7513909e51db9ed230afcb53cba00473eff05d55 | 2022-03-23T18:33:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | AndrewMcDowell | null | AndrewMcDowell/wav2vec2-xls-r-300m-arabic | 9 | null | transformers | 12,046 | ---
language:
- ar
license: apache-2.0
tags:
- ar
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ar
metrics:
- name: Test WER
type: wer
value: 47.54
- name: Test CER
type: cer
value: 17.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 93.72
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ar
metrics:
- name: Test WER
type: wer
value: 92.49
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Wer: 0.4783
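A minimal transcription sketch (not part of the original card; assumes a 16 kHz mono audio file and ffmpeg available for decoding):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="AndrewMcDowell/wav2vec2-xls-r-300m-arabic")

# "sample_ar.wav" is a hypothetical 16 kHz mono recording
print(asr("sample_ar.wav")["text"])
```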
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7972 | 0.43 | 500 | 5.1401 | 1.0 |
| 3.3241 | 0.86 | 1000 | 3.3220 | 1.0 |
| 3.1432 | 1.29 | 1500 | 3.0806 | 0.9999 |
| 2.9297 | 1.72 | 2000 | 2.5678 | 1.0057 |
| 2.2593 | 2.14 | 2500 | 1.1068 | 0.8218 |
| 2.0504 | 2.57 | 3000 | 0.7878 | 0.7114 |
| 1.937 | 3.0 | 3500 | 0.6955 | 0.6450 |
| 1.8491 | 3.43 | 4000 | 0.6452 | 0.6304 |
| 1.803 | 3.86 | 4500 | 0.5961 | 0.6042 |
| 1.7545 | 4.29 | 5000 | 0.5550 | 0.5748 |
| 1.7045 | 4.72 | 5500 | 0.5374 | 0.5743 |
| 1.6733 | 5.15 | 6000 | 0.5337 | 0.5404 |
| 1.6761 | 5.57 | 6500 | 0.5054 | 0.5266 |
| 1.655 | 6.0 | 7000 | 0.4926 | 0.5243 |
| 1.6252 | 6.43 | 7500 | 0.4946 | 0.5183 |
| 1.6209 | 6.86 | 8000 | 0.4915 | 0.5194 |
| 1.5772 | 7.29 | 8500 | 0.4725 | 0.5104 |
| 1.5602 | 7.72 | 9000 | 0.4726 | 0.5097 |
| 1.5783 | 8.15 | 9500 | 0.4667 | 0.4956 |
| 1.5442 | 8.58 | 10000 | 0.4685 | 0.4937 |
| 1.5597 | 9.01 | 10500 | 0.4708 | 0.4957 |
| 1.5406 | 9.43 | 11000 | 0.4539 | 0.4810 |
| 1.5274 | 9.86 | 11500 | 0.4502 | 0.4783 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
AntonClaesson/movie-plot-generator | 7eaca7d2925d50be81d8528dce7f2ce3aa5ddfce | 2021-10-18T17:36:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | AntonClaesson | null | AntonClaesson/movie-plot-generator | 9 | null | transformers | 12,047 | Entry not found |
ArvinZhuang/BiTAG-t5-large | 59e549828fef09022e8bbe25e88bc6537f5710c4 | 2022-02-13T23:27:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | ArvinZhuang | null | ArvinZhuang/BiTAG-t5-large | 9 | null | transformers | 12,048 | ---
inference:
parameters:
do_sample: True
max_length: 500
top_p: 0.9
top_k: 20
temperature: 1
num_return_sequences: 10
widget:
- text: "abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."
example_title: "BERT abstract"
---
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ArvinZhuang/BiTAG-t5-large")
tokenizer = AutoTokenizer.from_pretrained("ArvinZhuang/BiTAG-t5-large")
text = "abstract: [your abstract]" # use 'title:' as the prefix for title_to_abs task.
input_ids = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
max_length=500,
top_p=0.9,
top_k=20,
temperature=1,
num_return_sequences=10,
)
print("Output:\n" + 100 * '-')
for i, output in enumerate(outputs):
print("{}: {}".format(i+1, tokenizer.decode(output, skip_special_tokens=True)))
```
GitHub: https://github.com/ArvinZhuang/BiTAG |
Azaghast/DistilBERT-SCP-Class-Classification | 5e82bbcd6297a7a69eada14f511d724496002beb | 2021-08-25T10:45:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Azaghast | null | Azaghast/DistilBERT-SCP-Class-Classification | 9 | null | transformers | 12,049 | Entry not found |
BSC-TeMU/roberta-large-bne-sqac | 908ad74b0f407b63fa2af31e5b9cee7bcc36a30f | 2021-10-21T10:32:05.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | BSC-TeMU | null | BSC-TeMU/roberta-large-bne-sqac | 9 | 3 | transformers | 12,050 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "qa"
- "question answering"
datasets:
- "BSC-TeMU/SQAC"
metrics:
- "f1"
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-sqac
# Spanish RoBERTa-large trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset.
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
## Dataset
The dataset used is the [SQAC corpus](https://huggingface.co/datasets/BSC-TeMU/SQAC).
## Evaluation and results
F1 Score: 0.7993 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
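A minimal usage sketch with the question-answering pipeline (the example is illustrative and not from the original card):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="BSC-TeMU/roberta-large-bne-sqac")

result = qa(
    question="¿Dónde vive Clara?",
    context="Clara trabaja como ingeniera y vive en Madrid desde 2015.",
)
print(result["answer"], result["score"])
```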
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BigTooth/DialoGPT-Megumin | f015d66b769e873a9e6746a90c3639802a985652 | 2021-08-31T20:29:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | BigTooth | null | BigTooth/DialoGPT-Megumin | 9 | null | transformers | 12,051 | ---
tags:
- conversational
---
# Megumin model |
BogdanKuloren/continual-learning-paper-embeddings-model | da58f56e4d6ba7ece734a5c02642db7e2d2238bc | 2021-08-01T11:43:47.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"transformers"
]
| feature-extraction | false | BogdanKuloren | null | BogdanKuloren/continual-learning-paper-embeddings-model | 9 | null | transformers | 12,052 | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | 2b4ec8ffd8e044551c63a89a5566169e49a4b740 | 2021-10-17T12:10:36.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | 9 | null | transformers | 12,053 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-MSA Poetry Classification Model
## Model description
**CAMeLBERT-MSA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'],
['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9914996027946472},
{'label': 'الكامل', 'score': 0.917242169380188}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf | 2d666d02a1b63e86308e21e1392cb585c3229ebc | 2021-10-18T09:57:26.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf | 9 | null | transformers | 12,054 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'شلونك ؟ شخبارك ؟'
---
# CAMeLBERT-MSA POS-GLF Model
## Model description
**CAMeLBERT-MSA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'adv_interrog', 'score': 0.5622676, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.99969727, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999299, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9843815, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9998467, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'prep', 'score': 0.9993611, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.99993765, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CLAck/indo-pure | 800f65578c8980ef6c553fa554cd1473f649e12c | 2022-02-15T11:24:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | CLAck | null | CLAck/indo-pure | 9 | null | transformers | 12,055 | ---
language:
- en
- id
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
A purely fine-tuned version of MarianMT en-zh on the Indonesian language
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-pure")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-pure")
# Download a tokenizer that can tokenize English, since the fine-tuned model's tokenizer no longer handles it
# We reuse the one from the initial model
# This tokenizer is only used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 15.9336 |
| 2.0 | 28.0175 |
| 3.0 | 31.6603 |
| 4.0 | 33.9151 |
| 5.0 | 35.0472 |
| 6.0 | 35.8469 |
| 7.0 | 36.1180 |
| 8.0 | 36.6018 |
| 9.0 | 37.1973 |
| 10.0 | 37.2738 | |
CLTL/icf-domains | ae60ce5dc206521fbdf35d5aaf53d2e375eea433 | 2021-11-03T14:34:01.000Z | [
"pytorch",
"roberta",
"nl",
"transformers",
"license:mit",
"text-classification"
]
| text-classification | false | CLTL | null | CLTL/icf-domains | 9 | 1 | transformers | 12,056 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# A-PROOF ICF-domains Classification
## Description
A fine-tuned multi-label classification model that detects 9 [WHO-ICF](https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health) domains in clinical text in Dutch. The model is based on a pre-trained Dutch medical language model ([link to be added]()), a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC.
## ICF domains
The model can detect 9 domains, which were chosen due to their relevance to recovery from COVID-19:
ICF code | Domain | name in repo
---|---|---
b440 | Respiration functions | ADM
b140 | Attention functions | ATT
d840-d859 | Work and employment | BER
b1300 | Energy level | ENR
d550 | Eating | ETN
d450 | Walking | FAC
b455 | Exercise tolerance functions | INS
b530 | Weight maintenance functions | MBW
b152 | Emotional functions | STM
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
from simpletransformers.classification import MultiLabelClassificationModel
model = MultiLabelClassificationModel(
'roberta',
'CLTL/icf-domains',
use_cuda=False,
)
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
predictions, raw_outputs = model.predict([example])
```
The predictions look like this:
```
[[1, 0, 0, 0, 0, 1, 1, 0, 0]]
```
The indices of the multi-label stand for:
```
[ADM, ATT, BER, ENR, ETN, FAC, INS, MBW, STM]
```
In other words, the above prediction corresponds to assigning the labels ADM, FAC and INS to the example sentence.
The raw outputs look like this:
```
[[0.51907885 0.00268032 0.0030862 0.03066113 0.00616694 0.64720929
0.67348498 0.0118863 0.0046311 ]]
```
For this model, the threshold at which the prediction for a label flips from 0 to 1 is **0.5**.
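Continuing from the snippet above, a short sketch of applying that threshold to `raw_outputs` yourself (useful if you want a different cut-off):

```python
import numpy as np

labels = ["ADM", "ATT", "BER", "ENR", "ETN", "FAC", "INS", "MBW", "STM"]
threshold = 0.5

scores = np.array(raw_outputs[0])
assigned = [label for label, score in zip(labels, scores) if score > threshold]
print(assigned)  # ['ADM', 'FAC', 'INS'] for the example sentence above
```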
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
- Threshold: 0.5
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
### Sentence-level
| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM
|---|---|---|---|---|---|---|---|---|---
precision | 0.98 | 0.98 | 0.56 | 0.96 | 0.92 | 0.84 | 0.89 | 0.79 | 0.70
recall | 0.49 | 0.41 | 0.29 | 0.57 | 0.49 | 0.71 | 0.26 | 0.62 | 0.75
F1-score | 0.66 | 0.58 | 0.35 | 0.72 | 0.63 | 0.76 | 0.41 | 0.70 | 0.72
support | 775 | 39 | 54 | 160 | 382 | 253 | 287 | 125 | 181
### Note-level
| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM
|---|---|---|---|---|---|---|---|---|---
precision | 1.0 | 1.0 | 0.66 | 0.96 | 0.95 | 0.84 | 0.95 | 0.87 | 0.80
recall | 0.89 | 0.56 | 0.44 | 0.70 | 0.72 | 0.89 | 0.46 | 0.87 | 0.87
F1-score | 0.94 | 0.71 | 0.50 | 0.81 | 0.82 | 0.86 | 0.61 | 0.87 | 0.84
support | 231 | 27 | 34 | 92 | 165 | 95 | 116 | 64 | 94
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD |
CodeNinja1126/test-model | fd033046e4ded21e0211167c53d3e671eb54ef5f | 2021-05-18T17:45:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CodeNinja1126 | null | CodeNinja1126/test-model | 9 | null | transformers | 12,057 | Entry not found |
DimaOrekhov/cubert-method-name | dd2afd50a82c8eddaff2e209f82731171aa38ee2 | 2020-12-28T00:30:11.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | DimaOrekhov | null | DimaOrekhov/cubert-method-name | 9 | null | transformers | 12,058 | Entry not found |
Dongjae/mrc2reader | f6b382faccd8a858b151a3bdccbd88febd22c93a | 2021-05-21T13:25:57.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Dongjae | null | Dongjae/mrc2reader | 9 | null | transformers | 12,059 | The Reader model is for Korean Question Answering
The backbone model is deepset/xlm-roberta-large-squad2, fine-tuned on the KorQuAD-v1 dataset.
Evaluated on the KorQuAD evaluation set, it achieves approximately 87% EM and 92% F1.
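A minimal usage sketch, assuming the checkpoint works with the standard question-answering pipeline (the example is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Dongjae/mrc2reader")

result = qa(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이며, 한국에서 가장 큰 도시이기도 하다.",
)
print(result["answer"])
```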
Thank you |
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 | 731afe49349b7f8593db85bf8fcaf763bb46bcbf | 2022-03-23T18:27:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 | 9 | 1 | transformers | 12,060 | ---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- bg
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-bg-v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bg
metrics:
- name: Test WER
type: wer
value: 0.4709579127785184
- name: Test CER
type: cer
value: 0.10205125354383235
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 0.7053128872366791
- name: Test CER
type: cer
value: 0.210804311998487
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 72.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5197
- Wer: 0.4689
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs`
2. To evaluate on speech-recognition-community-v2/dev_data:
`python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1`
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3711 | 2.61 | 300 | 4.3122 | 1.0 |
| 3.1653 | 5.22 | 600 | 3.1156 | 1.0 |
| 2.8904 | 7.83 | 900 | 2.8421 | 0.9918 |
| 0.9207 | 10.43 | 1200 | 0.9895 | 0.8689 |
| 0.6384 | 13.04 | 1500 | 0.6994 | 0.7700 |
| 0.5215 | 15.65 | 1800 | 0.5628 | 0.6443 |
| 0.4573 | 18.26 | 2100 | 0.5316 | 0.6174 |
| 0.3875 | 20.87 | 2400 | 0.4932 | 0.5779 |
| 0.3562 | 23.48 | 2700 | 0.4972 | 0.5475 |
| 0.3218 | 26.09 | 3000 | 0.4895 | 0.5219 |
| 0.2954 | 28.7 | 3300 | 0.5226 | 0.5192 |
| 0.287 | 31.3 | 3600 | 0.4957 | 0.5146 |
| 0.2587 | 33.91 | 3900 | 0.4944 | 0.4893 |
| 0.2496 | 36.52 | 4200 | 0.4976 | 0.4895 |
| 0.2365 | 39.13 | 4500 | 0.5185 | 0.4819 |
| 0.2264 | 41.74 | 4800 | 0.5152 | 0.4776 |
| 0.2224 | 44.35 | 5100 | 0.5031 | 0.4746 |
| 0.2096 | 46.96 | 5400 | 0.5062 | 0.4708 |
| 0.2038 | 49.57 | 5700 | 0.5217 | 0.4698 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Dyzi/DialoGPT-small-landcheese | 3cb56c906e711d864d711aff51b21a4b0ab3d264 | 2021-09-11T23:26:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Dyzi | null | Dyzi/DialoGPT-small-landcheese | 9 | null | transformers | 12,061 | ---
tags:
- conversational
---
# Landcheese |
Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese | 97942a46c6ae7cae2058abeae27b15e589cf9ef9 | 2022-07-17T17:39:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:Common Voice",
"arxiv:2204.00618",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Edresson | null | Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese | 9 | 1 | transformers | 12,062 | ---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 33.96
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
import re
import torchaudio
from datasets import load_dataset

# Assumed punctuation filter; the original card uses chars_to_ignore_regex without defining it
chars_to_ignore_regex = r'[,\?\.\!\-\;\:"]'

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
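The snippet below calls `map_to_pred` and a `wer` metric that the card never defines; a minimal sketch of both, assuming the usual Wav2Vec2 CTC inference recipe, a full `Wav2Vec2Processor` for this checkpoint, and the `datasets` WER metric:

```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
wer = load_metric("wer")

def map_to_pred(batch):
    # Featurize the raw audio, run CTC inference, and decode greedy argmax predictions
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch
```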
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
FabioDataGeek/distilbert-base-uncased-finetuned-emotion | c997519d0501cef4b9c657aeabf6599118cdcb12 | 2022-07-22T16:02:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | FabioDataGeek | null | FabioDataGeek/distilbert-base-uncased-finetuned-emotion | 9 | null | transformers | 12,063 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258450981645597
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Accuracy: 0.926
- F1: 0.9258
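A minimal usage sketch with the text-classification pipeline (label names come from the checkpoint's config; `return_all_scores=True` shows every emotion's score):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="FabioDataGeek/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,
)

print(classifier("I can't believe I finally got the job, I'm thrilled!"))
```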
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8279 | 1.0 | 250 | 0.3208 | 0.9025 | 0.8979 |
| 0.2538 | 2.0 | 500 | 0.2196 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ghana-NLP/distilabena-base-v2-asante-twi-uncased | ca683f2704abf170d7237bf94254a6746e4e98f5 | 2020-10-22T20:51:34.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Ghana-NLP | null | Ghana-NLP/distilabena-base-v2-asante-twi-uncased | 9 | null | transformers | 12,064 | Entry not found |
Greg1901/BertSummaDev_AFD | 0dd6ca2d63840dc42687a00d0e0debbab71b4f10 | 2021-07-24T14:05:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Greg1901 | null | Greg1901/BertSummaDev_AFD | 9 | null | transformers | 12,065 | Entry not found |
HAttORi/DialoGPT-Medium-zerotwo | ce72032c419638325d5c837d7c40a9aece456330 | 2021-08-23T17:12:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | HAttORi | null | HAttORi/DialoGPT-Medium-zerotwo | 9 | null | transformers | 12,066 | ---
tags:
- conversational
---
# Zero Two DialoGPT Model |
Helsinki-NLP/opus-mt-af-fi | f566aa394e721e8f0c16afd99249be68267c06d5 | 2021-09-09T21:26:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-af-fi | 9 | null | transformers | 12,067 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-af-fi
* source languages: af
* target languages: fi
* OPUS readme: [af-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.fi | 32.3 | 0.576 |
|
Helsinki-NLP/opus-mt-bg-fi | 04d4dd3690cc730690da31b45745fb3f74198b0f | 2021-09-09T21:27:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bg-fi | 9 | null | transformers | 12,068 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bg-fi
* source languages: bg
* target languages: fi
* OPUS readme: [bg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.fi | 23.7 | 0.505 |
|
Helsinki-NLP/opus-mt-bg-fr | 400f439187067856647d8f7fb7f77af07d8bd260 | 2021-01-18T07:50:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bg-fr | 9 | null | transformers | 12,069 | ---
language:
- bg
- fr
tags:
- translation
license: apache-2.0
---
### bul-fra
* source group: Bulgarian
* target group: French
* OPUS readme: [bul-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md)
* model: transformer
* source language(s): bul
* target language(s): fra
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.fra | 53.7 | 0.693 |
### System Info:
- hf_name: bul-fra
- source_languages: bul
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'fr']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: fra
- short_pair: bg-fr
- chrF2_score: 0.693
- bleu: 53.7
- brevity_penalty: 0.977
- ref_len: 3669.0
- src_name: Bulgarian
- tgt_name: French
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: fr
- prefer_old: False
- long_pair: bul-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bi-fr | 31712329599ad7b50590cd35299ccc8d94029122 | 2021-09-09T21:27:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bi",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bi-fr | 9 | null | transformers | 12,070 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bi-fr
* source languages: bi
* target languages: fr
* OPUS readme: [bi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.fr | 21.5 | 0.382 |
|
Helsinki-NLP/opus-mt-chk-fr | 6db3456d236063ccbb97abdea52dc574da37a898 | 2021-09-09T21:28:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"chk",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-chk-fr | 9 | null | transformers | 12,071 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-chk-fr
* source languages: chk
* target languages: fr
* OPUS readme: [chk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.fr | 22.4 | 0.387 |
|
Helsinki-NLP/opus-mt-csn-es | c3086bbf7d9101947a5a07d286cb9ccc533f9e0a | 2021-09-09T21:29:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"csn",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-csn-es | 9 | null | transformers | 12,072 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-csn-es
* source languages: csn
* target languages: es
* OPUS readme: [csn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/csn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/csn-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/csn-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/csn-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.csn.es | 87.4 | 0.899 |
|
Helsinki-NLP/opus-mt-de-bi | 7c40aed9a4611cec93aa9560f2bb99e49e895789 | 2021-09-09T21:30:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"bi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-bi | 9 | null | transformers | 12,073 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-bi
* source languages: de
* target languages: bi
* OPUS readme: [de-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.bi | 25.7 | 0.450 |
|
Helsinki-NLP/opus-mt-de-efi | 1309ccb2f74acba991a654adf4ff1363a577d51b | 2021-09-09T21:30:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"efi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-efi | 9 | null | transformers | 12,074 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-efi
* source languages: de
* target languages: efi
* OPUS readme: [de-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.efi | 24.2 | 0.451 |
|
Helsinki-NLP/opus-mt-de-gaa | 0722f96d5ce2e9fd6b2e0df3987105a78d062d1c | 2021-09-09T21:31:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"gaa",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-gaa | 9 | null | transformers | 12,075 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-gaa
* source languages: de
* target languages: gaa
* OPUS readme: [de-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-gaa/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gaa/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gaa/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.gaa | 26.3 | 0.471 |
|
Helsinki-NLP/opus-mt-de-gil | 56bb25bf50c7b8268c9fd1ec8f8124e54631af59 | 2021-09-09T21:31:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"gil",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-gil | 9 | null | transformers | 12,076 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-gil
* source languages: de
* target languages: gil
* OPUS readme: [de-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-gil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-gil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.gil | 24.0 | 0.472 |
|
Helsinki-NLP/opus-mt-de-ln | 05dd393385fb99c42d5849c22cef67931922eff3 | 2021-09-09T21:32:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"ln",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-ln | 9 | null | transformers | 12,077 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ln
* source languages: de
* target languages: ln
* OPUS readme: [de-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ln/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ln | 26.7 | 0.504 |
|
Helsinki-NLP/opus-mt-ee-fi | 8547cfc9f2c5ef75f00c78ef563eef59fc0204ee | 2021-09-09T21:33:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ee",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ee-fi | 9 | null | transformers | 12,078 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ee-fi
* source languages: ee
* target languages: fi
* OPUS readme: [ee-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fi | 25.0 | 0.482 |
|
Helsinki-NLP/opus-mt-efi-fr | 7b528531e45c04716015e7c211ef2b74817ff438 | 2021-09-09T21:33:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"efi",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-efi-fr | 9 | null | transformers | 12,079 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-efi-fr
* source languages: efi
* target languages: fr
* OPUS readme: [efi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.fr | 25.1 | 0.419 |
|
Helsinki-NLP/opus-mt-en-cus | 495278af0387c6122abe44c4ef7b1c48ef62da66 | 2021-01-18T08:06:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"so",
"cus",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-cus | 9 | null | transformers | 12,080 | ---
language:
- en
- so
- cus
tags:
- translation
license: apache-2.0
---
### eng-cus
* source group: English
* target group: Cushitic languages
* OPUS readme: [eng-cus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md)
* model: transformer
* source language(s): eng
* target language(s): som
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.multi | 16.0 | 0.173 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.173 |
### System Info:
- hf_name: eng-cus
- source_languages: eng
- target_languages: cus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'so', 'cus']
- src_constituents: {'eng'}
- tgt_constituents: {'som'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cus
- short_pair: en-cus
- chrF2_score: 0.17300000000000001
- bleu: 16.0
- brevity_penalty: 1.0
- ref_len: 3.0
- src_name: English
- tgt_name: Cushitic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cus
- prefer_old: False
- long_pair: eng-cus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-fj | 2c98ee541817946993595aa514f12804b6c95efc | 2021-09-09T21:35:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"fj",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-fj | 9 | null | transformers | 12,081 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-fj
* source languages: en
* target languages: fj
* OPUS readme: [en-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.fj | 34.0 | 0.561 |
| Tatoeba.en.fj | 62.5 | 0.781 |
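The BLEU and chr-F figures above come from the linked eval files; the exact scoring setup is not spelled out in the card, but a roughly equivalent check can be run with `sacrebleu` (an assumption, shown only for illustration) against the released test set translations:

```python
import sacrebleu

# Dummy strings stand in for the system output and the gold references;
# in practice these would be read from the released test/eval files above.
hypotheses = ["this is an example system translation"]
references = [["this is an example reference translation"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}  chr-F: {chrf.score:.3f}")
```

Note that sacrebleu reports chr-F on a 0–100 scale, whereas the tables in these cards use 0–1.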
|
Helsinki-NLP/opus-mt-en-gmw | 11cd92347e176fdba93f37ea5af1367109d52516 | 2021-01-18T08:08:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"nl",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-gmw | 9 | null | transformers | 12,082 | ---
language:
- en
- nl
- lb
- af
- de
- fy
- yi
- gmw
tags:
- translation
license: apache-2.0
---
### eng-gmw
* source group: English
* target group: West Germanic languages
* OPUS readme: [eng-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.eval.txt)
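A minimal sketch of how the target-language token is typically supplied when loading this multilingual checkpoint through the standard `transformers` MarianMT classes (illustrative only; the input sentence and the choice of `>>deu<<` are arbitrary, any target ID listed above works):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the sentence-initial target-language token (here German, "deu");
# swap in any other target ID from the list above, e.g. >>nld<< or >>afr<<.
src = [">>deu<< The weather is nice today."]
batch = tokenizer(src, return_tensors="pt", padding=True)
output = model.generate(**batch)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```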
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engdeu.eng.deu | 21.4 | 0.518 |
| news-test2008-engdeu.eng.deu | 21.0 | 0.510 |
| newstest2009-engdeu.eng.deu | 20.4 | 0.513 |
| newstest2010-engdeu.eng.deu | 22.9 | 0.528 |
| newstest2011-engdeu.eng.deu | 20.5 | 0.508 |
| newstest2012-engdeu.eng.deu | 21.0 | 0.507 |
| newstest2013-engdeu.eng.deu | 24.7 | 0.533 |
| newstest2015-ende-engdeu.eng.deu | 28.2 | 0.568 |
| newstest2016-ende-engdeu.eng.deu | 33.3 | 0.605 |
| newstest2017-ende-engdeu.eng.deu | 26.5 | 0.559 |
| newstest2018-ende-engdeu.eng.deu | 39.9 | 0.649 |
| newstest2019-ende-engdeu.eng.deu | 35.9 | 0.616 |
| Tatoeba-test.eng-afr.eng.afr | 55.7 | 0.740 |
| Tatoeba-test.eng-ang.eng.ang | 6.5 | 0.164 |
| Tatoeba-test.eng-deu.eng.deu | 40.4 | 0.614 |
| Tatoeba-test.eng-enm.eng.enm | 2.3 | 0.254 |
| Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.248 |
| Tatoeba-test.eng-fry.eng.fry | 17.9 | 0.424 |
| Tatoeba-test.eng-gos.eng.gos | 2.2 | 0.309 |
| Tatoeba-test.eng-gsw.eng.gsw | 1.6 | 0.186 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.5 | 0.189 |
| Tatoeba-test.eng-ltz.eng.ltz | 20.2 | 0.383 |
| Tatoeba-test.eng.multi | 41.6 | 0.609 |
| Tatoeba-test.eng-nds.eng.nds | 18.9 | 0.437 |
| Tatoeba-test.eng-nld.eng.nld | 53.1 | 0.699 |
| Tatoeba-test.eng-pdc.eng.pdc | 7.7 | 0.262 |
| Tatoeba-test.eng-sco.eng.sco | 37.7 | 0.557 |
| Tatoeba-test.eng-stq.eng.stq | 5.9 | 0.380 |
| Tatoeba-test.eng-swg.eng.swg | 6.2 | 0.236 |
| Tatoeba-test.eng-yid.eng.yid | 6.8 | 0.296 |
### System Info:
- hf_name: eng-gmw
- source_languages: eng
- target_languages: gmw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'eng'}
- tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gmw
- short_pair: en-gmw
- chrF2_score: 0.609
- bleu: 41.6
- brevity_penalty: 0.9890000000000001
- ref_len: 74922.0
- src_name: English
- tgt_name: West Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gmw
- prefer_old: False
- long_pair: eng-gmw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-phi | 02fc4c73124d7c36e9e2d3c2fb6939591cff415b | 2021-01-18T08:14:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"phi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-phi | 9 | null | transformers | 12,083 | ---
language:
- en
- phi
tags:
- translation
license: apache-2.0
---
### eng-phi
* source group: English
* target group: Philippine languages
* OPUS readme: [eng-phi](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb hil ilo pag war
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 7.1 | 0.245 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.5 | 0.435 |
| Tatoeba-test.eng-hil.eng.hil | 18.0 | 0.506 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.4 | 0.590 |
| Tatoeba-test.eng.multi | 13.1 | 0.392 |
| Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.481 |
| Tatoeba-test.eng-war.eng.war | 12.8 | 0.441 |
### System Info:
- hf_name: eng-phi
- source_languages: eng
- target_languages: phi
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'phi']
- src_constituents: {'eng'}
- tgt_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: phi
- short_pair: en-phi
- chrF2_score: 0.392
- bleu: 13.1
- brevity_penalty: 1.0
- ref_len: 30022.0
- src_name: English
- tgt_name: Philippine languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: phi
- prefer_old: False
- long_pair: eng-phi
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-to | f5a9081211432e18c83753cb0d9a8cbf6c389067 | 2021-09-09T21:40:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"to",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-to | 9 | null | transformers | 12,084 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-to
* source languages: en
* target languages: to
* OPUS readme: [en-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.to | 56.3 | 0.689 |
|
Helsinki-NLP/opus-mt-eo-cs | 4bf5467a59411b10737527867d59f1a5549c8a5e | 2021-01-18T08:20:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"cs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-cs | 9 | null | transformers | 12,085 | ---
language:
- eo
- cs
tags:
- translation
license: apache-2.0
---
### epo-ces
* source group: Esperanto
* target group: Czech
* OPUS readme: [epo-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ces
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ces | 17.5 | 0.376 |
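The System Info block below lists `bleu`, `brevity_penalty`, and `ref_len` side by side; as a reminder of how these relate, standard BLEU multiplies the geometric mean of the n-gram precisions $p_n$ by a brevity penalty computed from the candidate length $c$ and the reference length $r$ (the `ref_len` field):

$$
\mathrm{BP} = \begin{cases} 1 & \text{if } c > r \\ e^{\,1 - r/c} & \text{if } c \le r \end{cases}
\qquad
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\Big(\sum_{n=1}^{4} \tfrac{1}{4} \log p_n\Big)
$$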
### System Info:
- hf_name: epo-ces
- source_languages: epo
- target_languages: ces
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'cs']
- src_constituents: {'epo'}
- tgt_constituents: {'ces'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ces
- short_pair: eo-cs
- chrF2_score: 0.376
- bleu: 17.5
- brevity_penalty: 0.922
- ref_len: 22148.0
- src_name: Esperanto
- tgt_name: Czech
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: cs
- prefer_old: False
- long_pair: epo-ces
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-eo-ru | b728dc341a0c961f0978a64dbee5af14a4d33f48 | 2021-01-18T08:21:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-ru | 9 | null | transformers | 12,086 | ---
language:
- eo
- ru
tags:
- translation
license: apache-2.0
---
### epo-rus
* source group: Esperanto
* target group: Russian
* OPUS readme: [epo-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.rus | 17.7 | 0.379 |
### System Info:
- hf_name: epo-rus
- source_languages: epo
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ru']
- src_constituents: {'epo'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: rus
- short_pair: eo-ru
- chrF2_score: 0.379
- bleu: 17.7
- brevity_penalty: 0.9179999999999999
- ref_len: 71288.0
- src_name: Esperanto
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ru
- prefer_old: False
- long_pair: epo-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-efi | f90e545aa2ad5dd3c2786ac4413b77f99fe96257 | 2021-09-09T21:42:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"efi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-efi | 9 | null | transformers | 12,087 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-efi
* source languages: es
* target languages: efi
* OPUS readme: [es-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.efi | 24.6 | 0.452 |
|
Helsinki-NLP/opus-mt-es-gaa | ed51dbff78c4ce9e4d16935b14a36073953ae4cd | 2021-09-09T21:42:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"gaa",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-gaa | 9 | null | transformers | 12,088 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-gaa
* source languages: es
* target languages: gaa
* OPUS readme: [es-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-gaa/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gaa/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gaa/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.gaa | 27.8 | 0.479 |
|
Helsinki-NLP/opus-mt-es-lt | 45e6a8c9b0eb25e62ca8d18df0edb8550bd96eb7 | 2021-01-18T08:26:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"lt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-lt | 9 | null | transformers | 12,089 | ---
language:
- es
- lt
tags:
- translation
license: apache-2.0
---
### spa-lit
* source group: Spanish
* target group: Lithuanian
* OPUS readme: [spa-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-lit/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): lit
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.lit | 40.2 | 0.643 |
### System Info:
- hf_name: spa-lit
- source_languages: spa
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'lt']
- src_constituents: {'spa'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: lit
- short_pair: es-lt
- chrF2_score: 0.643
- bleu: 40.2
- brevity_penalty: 0.956
- ref_len: 2341.0
- src_name: Spanish
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: lt
- prefer_old: False
- long_pair: spa-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-lua | 1e47438ff46e6599da6997b6f6cbe74001b94b49 | 2021-09-09T21:43:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"lua",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-lua | 9 | null | transformers | 12,090 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-lua
* source languages: es
* target languages: lua
* OPUS readme: [es-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.lua | 23.4 | 0.473 |
|
Helsinki-NLP/opus-mt-es-nso | d39fdafe118ead41c25bd8393901c15029d1714c | 2021-09-09T21:43:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"nso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-nso | 9 | null | transformers | 12,091 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-nso
* source languages: es
* target languages: nso
* OPUS readme: [es-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.nso | 33.2 | 0.531 |
|
Helsinki-NLP/opus-mt-es-sm | 6a03599c80a8375487939fe71193ffd92e651a7b | 2021-09-09T21:44:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"sm",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-sm | 9 | null | transformers | 12,092 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-sm
* source languages: es
* target languages: sm
* OPUS readme: [es-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-sm/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sm/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sm/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.sm | 25.5 | 0.450 |
|
Helsinki-NLP/opus-mt-es-tpi | ec0f575edd1d01c2283eb3338ae12e3c51a96353 | 2021-09-09T21:45:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"tpi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-tpi | 9 | null | transformers | 12,093 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tpi
* source languages: es
* target languages: tpi
* OPUS readme: [es-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tpi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tpi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tpi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tpi | 27.0 | 0.472 |
|
Helsinki-NLP/opus-mt-es-war | 31ec114a50009c31ba4fa53bfc1770c5405f6fb3 | 2021-09-09T21:45:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"war",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-war | 9 | null | transformers | 12,094 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-war
* source languages: es
* target languages: war
* OPUS readme: [es-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-war/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.war | 31.7 | 0.530 |
|
Helsinki-NLP/opus-mt-et-ru | a6a1ecdab9ebb448e43c87f281c19f36ed7656f2 | 2021-01-18T08:30:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"et",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-et-ru | 9 | null | transformers | 12,095 | ---
language:
- et
- ru
tags:
- translation
license: apache-2.0
---
### est-rus
* source group: Estonian
* target group: Russian
* OPUS readme: [est-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/est-rus/README.md)
* model: transformer-align
* source language(s): est
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/est-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/est-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/est-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.est.rus | 50.2 | 0.702 |
### System Info:
- hf_name: est-rus
- source_languages: est
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/est-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['et', 'ru']
- src_constituents: {'est'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/est-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/est-rus/opus-2020-06-17.test.txt
- src_alpha3: est
- tgt_alpha3: rus
- short_pair: et-ru
- chrF2_score: 0.7020000000000001
- bleu: 50.2
- brevity_penalty: 0.988
- ref_len: 3569.0
- src_name: Estonian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: et
- tgt_alpha2: ru
- prefer_old: False
- long_pair: est-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-et-sv | 67259de3338ab3aac7729000903aa1d653b4129f | 2021-09-09T21:46:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"et",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-et-sv | 9 | null | transformers | 12,096 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-et-sv
* source languages: et
* target languages: sv
* OPUS readme: [et-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.et.sv | 28.9 | 0.513 |
|
Helsinki-NLP/opus-mt-fi-crs | a29ce204522a57c59a19a3eacaff897351fcd859 | 2021-09-09T21:46:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"crs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-crs | 9 | null | transformers | 12,097 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-crs
* source languages: fi
* target languages: crs
* OPUS readme: [fi-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-crs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-crs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-crs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.crs | 29.6 | 0.491 |
|
Helsinki-NLP/opus-mt-fi-guw | fea94fe902a74d8be70eb590aa3304a42a10da14 | 2021-09-09T21:47:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"guw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-guw | 9 | null | transformers | 12,098 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-guw
* source languages: fi
* target languages: guw
* OPUS readme: [fi-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-guw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-guw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-guw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.guw | 32.4 | 0.527 |
|
Helsinki-NLP/opus-mt-fi-it | da46e9f066abd8c179773bb806af9be159b86f37 | 2021-09-09T21:48:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-it | 9 | null | transformers | 12,099 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-it
* source languages: fi
* target languages: it
* OPUS readme: [fi-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-it/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-it/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-it/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.it | 42.7 | 0.657 |
|