modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---
uer/roberta-small-wwm-chinese-cluecorpussmall | f61672c1a9b3b841a1b83313cdf35f33d348dc36 | 2022-07-18T05:43:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | uer | null | uer/roberta-small-wwm-chinese-cluecorpussmall | 15 | null | transformers | 9,700 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese Whole Word Masking RoBERTa Miniatures
## Model description
This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. To make the results easy to reproduce, we used a publicly available corpus and word segmentation tool, and we provide all training details.
You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **Tiny** | [**2/128 (Tiny)**][2_128] |
| **Mini** | [**4/256 (Mini)**][4_256] |
| **Small** | [**4/512 (Small)**][4_512] |
| **Medium** | [**8/512 (Medium)**][8_512] |
| **Base** | [**12/768 (Base)**][12_768] |
| **Large** | [**24/1024 (Large)**][24_1024] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny-WWM | 72.1 | 82.8 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 |
| RoBERTa-Mini-WWM | 76.1 | 84.9 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 |
| RoBERTa-Small-WWM | 77.3 | 86.8 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 |
| RoBERTa-Medium-WWM | 78.4 | 88.2 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 |
| RoBERTa-Base-WWM | 80.1 | 90.0 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 |
| RoBERTa-Large-WWM | 81.0 | 90.4 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 |
For each task, we selected the best fine-tuning hyperparameters from the lists below and trained with a sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall')
>>> unmasker("北京是[MASK]国的首都。")
[
    {'score': 0.294228732585907,
     'token': 704,
     'token_str': '中',
     'sequence': '北 京 是 中 国 的 首 都 。'},
    {'score': 0.19691626727581024,
     'token': 1266,
     'token_str': '北',
     'sequence': '北 京 是 北 国 的 首 都 。'},
    {'score': 0.1070084273815155,
     'token': 7506,
     'token_str': '韩',
     'sequence': '北 京 是 韩 国 的 首 都 。'},
    {'score': 0.031527262181043625,
     'token': 2769,
     'token_str': '我',
     'sequence': '北 京 是 我 国 的 首 都 。'},
    {'score': 0.023054633289575577,
     'token': 1298,
     'token_str': '南',
     'sequence': '北 京 是 南 国 的 首 都 。'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
[jieba](https://github.com/fxsjy/jieba) is used as the word segmentation tool.
Taking Whole Word Masking RoBERTa-Medium as an example:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Hugging Face's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall
[24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall |
uer/roberta-base-wwm-chinese-cluecorpussmall | 6b014f1b0ba595f776e8c6a7b26da78685304b92 | 2022-07-18T05:50:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | uer | null | uer/roberta-base-wwm-chinese-cluecorpussmall | 15 | null | transformers | 9,701 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese Whole Word Masking RoBERTa Miniatures
## Model description
This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. To make the results easy to reproduce, we used a publicly available corpus and word segmentation tool, and we provide all training details.
You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **Tiny** | [**2/128 (Tiny)**][2_128] |
| **Mini** | [**4/256 (Mini)**][4_256] |
| **Small** | [**4/512 (Small)**][4_512] |
| **Medium** | [**8/512 (Medium)**][8_512] |
| **Base** | [**12/768 (Base)**][12_768] |
| **Large** | [**24/1024 (Large)**][24_1024] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny-WWM | 72.1 | 82.8 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 |
| RoBERTa-Mini-WWM | 76.1 | 84.9 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 |
| RoBERTa-Small-WWM | 77.3 | 86.8 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 |
| RoBERTa-Medium-WWM | 78.4 | 88.2 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 |
| RoBERTa-Base-WWM | 80.1 | 90.0 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 |
| RoBERTa-Large-WWM | 81.0 | 90.4 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 |
For each task, we selected the best fine-tuning hyperparameters from the lists below and trained with a sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall')
>>> unmasker("北京是[MASK]国的首都。")
[
    {'score': 0.294228732585907,
     'token': 704,
     'token_str': '中',
     'sequence': '北 京 是 中 国 的 首 都 。'},
    {'score': 0.19691626727581024,
     'token': 1266,
     'token_str': '北',
     'sequence': '北 京 是 北 国 的 首 都 。'},
    {'score': 0.1070084273815155,
     'token': 7506,
     'token_str': '韩',
     'sequence': '北 京 是 韩 国 的 首 都 。'},
    {'score': 0.031527262181043625,
     'token': 2769,
     'token_str': '我',
     'sequence': '北 京 是 我 国 的 首 都 。'},
    {'score': 0.023054633289575577,
     'token': 1298,
     'token_str': '南',
     'sequence': '北 京 是 南 国 的 首 都 。'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
[jieba](https://github.com/fxsjy/jieba) is used as the word segmentation tool.
Taking Whole Word Masking RoBERTa-Medium as an example:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--whole_word_masking \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Hugging Face's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall
[24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall |
anahitapld/dbd_electra | b9a5333f6a2d64ca551ab2181596d3b274d189fc | 2022-07-18T09:05:12.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | anahitapld | null | anahitapld/dbd_electra | 15 | null | transformers | 9,702 | ---
license: apache-2.0
---
|
anahitapld/dbd_Roberta | 7418bc8d2412875b72045d42a2da3ad5a299e968 | 2022-07-18T09:16:48.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | anahitapld | null | anahitapld/dbd_Roberta | 15 | null | transformers | 9,703 | ---
license: apache-2.0
---
|
translationtech/nllb_distilled | 8bb35f68a86604f3f99ea3bf7667c9017b9fecea | 2022-07-18T15:42:02.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:cc-by-nc-4.0",
"autotrain_compatible"
]
| text2text-generation | false | translationtech | null | translationtech/nllb_distilled | 15 | null | transformers | 9,704 | ---
license: cc-by-nc-4.0
---
|
Evelyn18/roberta-base-spanish-squades-modelo-robertav1 | f984c2c4aa73fc78af3adad55152839e7e6a31f2 | 2022-07-19T18:29:08.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-modelo-robertav1 | 15 | null | transformers | 9,705 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-modelo-robertav1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-modelo-robertav1
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4358
## Model description
More information needed
## Intended uses & limitations
More information needed
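As a rough illustration (not part of the original card), the checkpoint can presumably be queried with the standard `transformers` question-answering pipeline; the question and context below are invented examples:
```python
# Hypothetical usage sketch: Spanish extractive QA with this fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-modelo-robertav1",
)
result = qa(
    question="¿Quién otorga las becas?",
    context="Las becas son otorgadas por el Ministerio de Educación.",
)
print(result["answer"], result["score"])
```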
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 1.8825 |
| No log | 2.0 | 12 | 1.7787 |
| No log | 3.0 | 18 | 2.0521 |
| No log | 4.0 | 24 | 2.2991 |
| No log | 5.0 | 30 | 2.4029 |
| No log | 6.0 | 36 | 2.4358 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cep-ter/fine-tune-MonoTransQuest-fa | 945a4c7dbae58b1f12976f064793790a27abc10b | 2022-07-20T03:20:41.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | cep-ter | null | cep-ter/fine-tune-MonoTransQuest-fa | 15 | null | transformers | 9,706 | Entry not found |
commanderstrife/bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner | 6a7aa37ee883b3b1a855a5351f4517c9208ba211 | 2022-07-20T12:03:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:bc4chemd_ner",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | commanderstrife | null | commanderstrife/bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner | 15 | null | transformers | 9,707 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- bc4chemd_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: bc4chemd_ner
type: bc4chemd_ner
args: bc4chemd
metrics:
- name: Precision
type: precision
value: 0.8944236722550557
- name: Recall
type: recall
value: 0.8777321865383098
- name: F1
type: f1
value: 0.8859993229654115
- name: Accuracy
type: accuracy
value: 0.9908228496683563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the bc4chemd_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0641
- Precision: 0.8944
- Recall: 0.8777
- F1: 0.8860
- Accuracy: 0.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
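As a hedged sketch (not supplied with the card), the model should work with the standard token-classification pipeline; the input sentence is an invented example:
```python
# Chemical named-entity recognition with the fine-tuned Bio_ClinicalBERT checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="commanderstrife/bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("The patient was treated with acetaminophen and ibuprofen."))
```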
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.006 | 1.0 | 1918 | 0.0310 | 0.8697 | 0.8510 | 0.8602 | 0.9894 |
| 0.0097 | 2.0 | 3836 | 0.0345 | 0.8855 | 0.8637 | 0.8745 | 0.9898 |
| 0.0058 | 3.0 | 5754 | 0.0359 | 0.8733 | 0.8836 | 0.8784 | 0.9902 |
| 0.0014 | 4.0 | 7672 | 0.0440 | 0.8723 | 0.8842 | 0.8782 | 0.9903 |
| 0.0005 | 5.0 | 9590 | 0.0539 | 0.8862 | 0.8673 | 0.8766 | 0.9903 |
| 0.0001 | 6.0 | 11508 | 0.0558 | 0.8939 | 0.8628 | 0.8781 | 0.9904 |
| 0.0001 | 7.0 | 13426 | 0.0558 | 0.8846 | 0.8729 | 0.8787 | 0.9903 |
| 0.0012 | 8.0 | 15344 | 0.0635 | 0.8935 | 0.8696 | 0.8814 | 0.9905 |
| 0.0 | 9.0 | 17262 | 0.0624 | 0.8897 | 0.8831 | 0.8864 | 0.9908 |
| 0.0002 | 10.0 | 19180 | 0.0641 | 0.8944 | 0.8777 | 0.8860 | 0.9908 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jaeyeon/korean-aihub-learning-2 | b5f2690421054e629d81a386d1c129e841816c61 | 2022-07-20T08:31:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | jaeyeon | null | jaeyeon/korean-aihub-learning-2 | 15 | null | transformers | 9,708 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: korean-aihub-learning-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean-aihub-learning-2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9945
- Wer: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
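A minimal inference sketch (assumed, not provided by the author); `speech.wav` is a placeholder path to a 16 kHz Korean recording:
```python
# Transcribe a Korean audio file with the fine-tuned XLS-R checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jaeyeon/korean-aihub-learning-2",
)
print(asr("speech.wav")["text"])
```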
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 35 | 46.3840 | 1.0 |
| No log | 1.99 | 70 | 26.0949 | 1.0 |
| 37.1581 | 2.99 | 105 | 19.0168 | 1.0 |
| 37.1581 | 3.99 | 140 | 13.3294 | 1.0 |
| 37.1581 | 4.99 | 175 | 7.9410 | 1.0 |
| 12.5054 | 5.99 | 210 | 5.0323 | 1.0 |
| 12.5054 | 6.99 | 245 | 4.6242 | 1.0 |
| 12.5054 | 7.99 | 280 | 4.6206 | 1.0 |
| 4.8394 | 8.99 | 315 | 4.5820 | 1.0 |
| 4.8394 | 9.99 | 350 | 4.5629 | 1.0 |
| 4.8394 | 10.99 | 385 | 4.5385 | 1.0 |
| 4.6489 | 11.99 | 420 | 4.5627 | 1.0 |
| 4.6489 | 12.99 | 455 | 4.5276 | 1.0 |
| 4.6489 | 13.99 | 490 | 4.5292 | 1.0 |
| 4.5654 | 14.99 | 525 | 4.5179 | 1.0 |
| 4.5654 | 15.99 | 560 | 4.4928 | 1.0 |
| 4.5654 | 16.99 | 595 | 4.4791 | 1.0 |
| 4.521 | 17.99 | 630 | 4.4649 | 1.0 |
| 4.521 | 18.99 | 665 | 4.4588 | 1.0 |
| 4.3529 | 19.99 | 700 | 4.3632 | 1.0 |
| 4.3529 | 20.99 | 735 | 4.2990 | 1.0 |
| 4.3529 | 21.99 | 770 | 4.2326 | 0.9988 |
| 4.1301 | 22.99 | 805 | 4.0843 | 1.0 |
| 4.1301 | 23.99 | 840 | 3.9784 | 0.9975 |
| 4.1301 | 24.99 | 875 | 3.7876 | 1.0 |
| 3.7047 | 25.99 | 910 | 3.6109 | 0.9988 |
| 3.7047 | 26.99 | 945 | 3.4049 | 0.9828 |
| 3.7047 | 27.99 | 980 | 3.1913 | 0.9606 |
| 3.006 | 28.99 | 1015 | 3.0567 | 0.9508 |
| 3.006 | 29.99 | 1050 | 2.9945 | 0.9533 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Sameen53/CV_bn_trained_on_Train_0.3 | 6648f4491c3892aea6b1fbf4c3c792923b2c88c5 | 2022-07-30T07:46:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | Sameen53 | null | Sameen53/CV_bn_trained_on_Train_0.3 | 15 | null | transformers | 9,709 | ---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: CV_bn_trained_on_Train_0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CV_bn_trained_on_Train_0.3
This model is a fine-tuned version of [Lancelot53/CV_bn_trained_on_Validation](https://huggingface.co/Lancelot53/CV_bn_trained_on_Validation) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.3415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0435 | 1.22 | 4000 | inf | 0.3415 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Professor/wav2vec2-base-960h-finetuned | 168d24dad032f62780ea3009502378b06a87c829 | 2022-07-21T03:00:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | Professor | null | Professor/wav2vec2-base-960h-finetuned | 15 | null | transformers | 9,710 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-960h-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1430
- Accuracy: 0.6516
## Model description
More information needed
## Intended uses & limitations
More information needed
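A hypothetical inference sketch (not from the card); `example.wav` is a placeholder audio file path:
```python
# Audio classification with the fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="Professor/wav2vec2-base-960h-finetuned",
)
print(clf("example.wav", top_k=3))
```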
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5958 | 1.0 | 203 | 2.4754 | 0.2714 |
| 2.0809 | 2.0 | 406 | 1.9972 | 0.3930 |
| 1.8486 | 3.0 | 609 | 1.6918 | 0.4658 |
| 1.5857 | 4.0 | 812 | 1.5089 | 0.5186 |
| 1.4819 | 5.0 | 1015 | 1.4027 | 0.5508 |
| 1.3859 | 6.0 | 1218 | 1.3146 | 0.5867 |
| 1.3448 | 7.0 | 1421 | 1.2078 | 0.6281 |
| 1.2551 | 8.0 | 1624 | 1.1600 | 0.6447 |
| 1.1506 | 9.0 | 1827 | 1.1595 | 0.6512 |
| 1.2435 | 10.0 | 2030 | 1.1430 | 0.6516 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ahmed007/mt5-small-ibn-Shaddad-v4 | c7bec6a31bc0766ca3b54595b1db06e7e5756be3 | 2022-07-21T05:04:38.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"Poet",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Ahmed007 | null | Ahmed007/mt5-small-ibn-Shaddad-v4 | 15 | null | transformers | 9,711 | ---
license: apache-2.0
tags:
- Poet
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-ibn-Shaddad-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-ibn-Shaddad-v4
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9233
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
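Since the checkpoint is an mT5 seq2seq model, it can presumably be driven through the text2text-generation pipeline; this sketch is an assumption, and the input string is a placeholder:
```python
# Generate text with the fine-tuned mT5-small checkpoint.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Ahmed007/mt5-small-ibn-Shaddad-v4",
)
print(generator("Input text goes here.", max_length=64))
```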
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 5.0001 | 1.0 | 935 | 3.1102 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.4066 | 2.0 | 1870 | 2.9836 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.2832 | 3.0 | 2805 | 2.9384 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.2334 | 4.0 | 3740 | 2.9233 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dl4nlp/distilbert-base-uncased-nq-short-for-square | 1fd46acdc80c7af2688f8a95b8ef82506f6b3aeb | 2022-07-22T20:26:54.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | dl4nlp | null | dl4nlp/distilbert-base-uncased-nq-short-for-square | 15 | null | transformers | 9,712 | Entry not found |
jcashmoney123/autotrain-amazon-summarization-1170943400 | 60d89b59a2ab4ea2f6de0534f378ac6ba5289d99 | 2022-07-23T18:06:12.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:jcashmoney123/autotrain-data-amazon-summarization",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | jcashmoney123 | null | jcashmoney123/autotrain-amazon-summarization-1170943400 | 15 | null | transformers | 9,713 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- jcashmoney123/autotrain-data-amazon-summarization
co2_eq_emissions: 25.718350806012065
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1170943400
- CO2 Emissions (in grams): 25.718350806012065
## Validation Metrics
- Loss: 2.569204092025757
- Rouge1: 21.072
- Rouge2: 6.2072
- RougeL: 18.9156
- RougeLsum: 18.8997
- Gen Len: 10.7165
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/jcashmoney123/autotrain-amazon-summarization-1170943400
```
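A hypothetical Python equivalent of the cURL call above, using the `requests` library (same endpoint and payload as shown in the card):
```python
import requests

API_URL = "https://api-inference.huggingface.co/jcashmoney123/autotrain-amazon-summarization-1170943400"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```
|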
SIMAS-UN/blaming_vulnerability | 59e219e164dc36f6b8965dc9e98a8859fee5e298 | 2022-07-24T04:07:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | SIMAS-UN | null | SIMAS-UN/blaming_vulnerability | 15 | null | transformers | 9,714 | Entry not found |
ccdv/lsg-xlm-roberta-base-4096 | 90fb5adb207bd5d0d7e54bcffc7e2f4eb3bfe895 | 2022-07-26T20:14:38.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"en",
"arxiv:2105.00572",
"transformers",
"long context",
"autotrain_compatible"
]
| fill-mask | false | ccdv | null | ccdv/lsg-xlm-roberta-base-4096 | 15 | null | transformers | 9,715 | ---
language: en
tags:
- xlm-roberta
- long context
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is adapted from [XLM-RoBERTa-base](https://huggingface.co/xlm-roberta-base) model without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is nevertheless recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...). \
Encoder-decoder use is supported, but I did not test it extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python:
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-xlm-roberta-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-xlm-roberta-base-4096")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python:
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-xlm-roberta-base-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre-merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mechanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* sparsity_type="block_stride", use a striding mechanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
## Tasks
Fill mask example:
```python:
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-xlm-roberta-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-xlm-roberta-base-4096")
SENTENCES = ["Paris is the <mask> of France."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.']
```
Classification example:
```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-xlm-roberta-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-xlm-roberta-base-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-xlm-roberta-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-xlm-roberta-base-4096")
for name, param in model.named_parameters():
if "global_embeddings" not in name:
param.requires_grad = False
else:
param.required_grad = True
```
**XLM-RoBERTa**
```
@article{DBLP:journals/corr/abs-2105-00572,
author = {Naman Goyal and
Jingfei Du and
Myle Ott and
Giri Anantharaman and
Alexis Conneau},
title = {Larger-Scale Transformers for Multilingual Masked Language Modeling},
journal = {CoRR},
volume = {abs/2105.00572},
year = {2021},
url = {https://arxiv.org/abs/2105.00572},
eprinttype = {arXiv},
eprint = {2105.00572},
timestamp = {Wed, 12 May 2021 15:54:31 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-00572.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
MiriUll/distilbert-german-text-complexity | 1a1b02707747331fd5c6ecda8e7e34c85f7b56fa | 2022-07-25T13:55:29.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | MiriUll | null | MiriUll/distilbert-german-text-complexity | 15 | null | transformers | 9,716 | language: de
This is a version of "distilbert-base-german-cased" fine-tuned for text complexity prediction on a scale between 1 and 7. |
mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_empathy_classifier | 683570f63b6c53eccc0098e6900cf9657a9a090f | 2022-07-25T15:10:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | mmillet | null | mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_empathy_classifier | 15 | null | transformers | 9,717 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-cased-conversational-v1_single_finetuned_empathy_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-cased-conversational-v1_single_finetuned_empathy_classifier
This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0183
- Accuracy: 0.6218
- F1: 0.6262
- Precision: 0.6318
- Recall: 0.6218
## Model description
More information needed
## Intended uses & limitations
More information needed
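An illustrative usage sketch (an assumption, not from the authors); the Russian sentence is an invented example:
```python
# Classify a Russian utterance with the fine-tuned DistilRuBERT checkpoint.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_empathy_classifier",
)
print(clf("Мне очень жаль, что так получилось."))
```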
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0456 | 1.0 | 9 | 0.9718 | 0.4958 | 0.4197 | 0.6526 | 0.4958 |
| 0.9042 | 2.0 | 18 | 0.8920 | 0.5882 | 0.5769 | 0.5784 | 0.5882 |
| 0.7923 | 3.0 | 27 | 0.8427 | 0.6134 | 0.5861 | 0.5935 | 0.6134 |
| 0.7544 | 4.0 | 36 | 0.8400 | 0.6387 | 0.6234 | 0.6344 | 0.6387 |
| 0.6675 | 5.0 | 45 | 0.8410 | 0.6303 | 0.6095 | 0.6184 | 0.6303 |
| 0.6091 | 6.0 | 54 | 0.9095 | 0.6050 | 0.6041 | 0.6396 | 0.6050 |
| 0.6279 | 7.0 | 63 | 0.8596 | 0.6723 | 0.6692 | 0.6725 | 0.6723 |
| 0.4968 | 8.0 | 72 | 0.8725 | 0.6303 | 0.6274 | 0.6253 | 0.6303 |
| 0.4459 | 9.0 | 81 | 0.9120 | 0.6387 | 0.6395 | 0.6426 | 0.6387 |
| 0.4122 | 10.0 | 90 | 0.9478 | 0.6303 | 0.6262 | 0.6248 | 0.6303 |
| 0.3244 | 11.0 | 99 | 0.9746 | 0.6387 | 0.6375 | 0.6381 | 0.6387 |
| 0.3535 | 12.0 | 108 | 1.0183 | 0.6218 | 0.6262 | 0.6318 | 0.6218 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Evelyn18/roberta-base-spanish-squades-becasIncentivos4 | 9f9262320dd011230216dcceafb80edac51eefac | 2022-07-27T16:52:12.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-becasIncentivos4 | 15 | null | transformers | 9,718 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasIncentivos4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasIncentivos4
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 11 | 1.8136 |
| No log | 2.0 | 22 | 1.7734 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
anzorq/kbd_lat-ru_char_tokenizer | 36b7ccef5d7de91dbe5b202da358f33763f1fd23 | 2022-07-30T04:19:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | anzorq | null | anzorq/kbd_lat-ru_char_tokenizer | 15 | null | transformers | 9,719 | Entry not found |
Aastha/wav2vec2-large-xls-r-1b-hi | 8f2124b031a266b4f410f1e3d77f1203a82d349b | 2022-02-05T03:46:42.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Aastha | null | Aastha/wav2vec2-large-xls-r-1b-hi | 14 | null | transformers | 9,720 | Entry not found |
Alexander-Learn/bert-finetuned-ner | e689b6a614c2578786bc150d878f082caedcb6c7 | 2022-01-28T08:29:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Alexander-Learn | null | Alexander-Learn/bert-finetuned-ner | 14 | null | transformers | 9,721 | Entry not found |
ArBert/roberta-base-finetuned-ner-kmeans | d3cd105407f27522a59504f2c5a2c7f4262379b5 | 2022-02-12T16:54:18.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ArBert | null | ArBert/roberta-base-finetuned-ner-kmeans | 14 | null | transformers | 9,722 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-kmeans
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.955868544600939
- name: Recall
type: recall
value: 0.9614658103513412
- name: F1
type: f1
value: 0.9586590074394953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-kmeans
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9559
- Recall: 0.9615
- F1: 0.9587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0248 | 1.0 | 878 | 0.0609 | 0.9507 | 0.9561 | 0.9534 |
| 0.0163 | 2.0 | 1756 | 0.0640 | 0.9515 | 0.9578 | 0.9546 |
| 0.0089 | 3.0 | 2634 | 0.0592 | 0.9559 | 0.9615 | 0.9587 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
BSC-TeMU/roberta-large-bne-capitel-ner | ee42a198b1b798b7bfc3a617e2e24d0d568b07bf | 2021-10-21T10:31:30.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | BSC-TeMU | null | BSC-TeMU/roberta-large-bne-capitel-ner | 14 | null | transformers | 9,723 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
---
**⚠️ NOTICE ⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-ner
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset.
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
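A minimal usage sketch (not part of the original card), assuming the standard token-classification pipeline; the Spanish sentence is an invented example:
```python
# Spanish NER with the CAPITEL fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="BSC-TeMU/roberta-large-bne-capitel-ner",
    aggregation_strategy="simple",
)
print(ner("La Biblioteca Nacional de España se encuentra en Madrid."))
```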
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
## Evaluation and results
F1 Score: 0.8998
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Barkavi/totto-t5-base-bert-score-121K | 8b2149134039af9c40680a9e0eef55431300c14b | 2022-06-23T09:27:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Barkavi | null | Barkavi/totto-t5-base-bert-score-121K | 14 | null | transformers | 9,724 | **Dataset**
ToTTo is an open-domain English Table-to-Text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table, a set of highlighted table cells, page title and section title as inputs, it produces a one-sentence description summarising the key details from the inputs. This dataset can be taken from hugging face (https://huggingface.co/datasets/totto).
**Model**
The pre-trained text-to-text "t5-base" model is fine-tuned on the Table-to-Text ToTTo dataset (downstream task) using the complete training split of around 120,761 examples. During fine-tuning for this downstream task, the BERTScore metric was used for evaluation instead of the standard BLEU metric.
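A rough usage sketch (not from the model author). ToTTo inputs are a linearized table string; the exact linearization shown here is an assumption for illustration and may not match the preprocessing used during fine-tuning.
```python
# Generate a one-sentence description from a linearized ToTTo-style table input.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_id = "Barkavi/totto-t5-base-bert-score-121K"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Linearized page title, section title, and highlighted cells (illustrative format).
table_input = (
    "<page_title> List of Governors of South Carolina </page_title> "
    "<section_title> Governors under the Constitution of 1868 </section_title> "
    "<table> <cell> Daniel Henry Chamberlain <col_header> Governor </col_header> </cell> "
    "<cell> December 1, 1874 <col_header> Took Office </col_header> </cell> </table>"
)
input_ids = tokenizer(table_input, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
|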
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | 7f7956b74338213d5f8f8b68216bd4b3a4fbd56c | 2021-10-18T10:15:57.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | 14 | null | transformers | 9,725 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'عامل ايه ؟'
---
# CAMeLBERT-Mix POS-EGY Model
## Model description
**CAMeLBERT-Mix POS-EGY Model** is a Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy')
>>> text = 'عامل ايه ؟'
>>> pos(text)
[{'entity': 'adj', 'score': 0.9972628, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9525163, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99869114, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Davlan/mT5_base_yoruba_adr | 9a6c4a7ba523ea0c6f6e5acc67592b454204e384 | 2021-04-20T21:16:26.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"yo",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2003.10564",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Davlan | null | Davlan/mT5_base_yoruba_adr | 14 | null | transformers | 9,726 |
---
language: yo
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mT5_base_yoruba_adr
## Model description
**mT5_base_yoruba_adr** is an **automatic diacritics restoration** model for the Yorùbá language based on a fine-tuned mT5-base model. It achieves **state-of-the-art performance** for adding the correct diacritics or tonal marks to Yorùbá texts.
Specifically, this model is an *mT5_base* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for ADR.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/mT5_base_yoruba_adr")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")
# Diacritics restoration is a text-to-text task, so the mT5 checkpoint is driven
# through the text2text-generation pipeline.
adr = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
example = "..."  # an undiacritized Yorùbá sentence goes here
adr_results = adr(example)
print(adr_results)
```
#### Limitations and bias
This model is limited by its training data (the JW300 Yorùbá corpus and the Menyo-20k dataset). It may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 Yorùbá corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
64.63 BLEU on [Global Voices test set](https://arxiv.org/abs/2003.10564)
70.27 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647)
### BibTeX entry and citation info
By Jesujoba Alabi and David Adelani
```
```
|
EasthShin/Emotion-Classification-bert-base | f010408833eb76cc3c12373c5aaadc241bddc8bd | 2021-07-26T09:36:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EasthShin | null | EasthShin/Emotion-Classification-bert-base | 14 | null | transformers | 9,727 | Entry not found |
Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian | 84a5dc0cd8e013c41c738fd457e54f3f7bc815a4 | 2022-07-17T17:38:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:Common Voice",
"arxiv:2204.00618",
"transformers",
"audio",
"speech",
"Russian-speech-corpus",
"PyTorch",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Edresson | null | Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian | 14 | 2 | transformers | 9,728 | ---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- Russian-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Russian
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 36.59
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Russian
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resampl(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("Γ’β¬β’", "'")
return batch
```
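`map_to_pred` and the `wer` metric are not defined in the snippet above; a minimal sketch of one possible definition, assuming the model loaded in the earlier block and that a matching `Wav2Vec2Processor` is available for this repository, could look like this:
```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor

# Assumption: a processor can be loaded from the same repository as the model above
processor = Wav2Vec2Processor.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-russian")
wer = load_metric("wer")

def map_to_pred(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```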
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
GermanT5/german-t5-oscar-ep1-prompted-germanquad | 5176afcca863ecca7862a17eaee4415f1b6ab44d | 2022-01-25T09:03:14.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | GermanT5 | null | GermanT5/german-t5-oscar-ep1-prompted-germanquad | 14 | null | transformers | 9,729 | ---
tags:
- generated_from_trainer
widget:
- text: |
Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. Derzeit arbeitet er als Machine Learning Engineer und Tech Lead bei Hugging Face, um künstliche Intelligenz durch Open Source und Open Science zu demokratisieren.
Welches Ziel hat Hugging Face?
model-index:
- name: test-german-t5-prompted-germanquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-german-t5-prompted-germanquad
eval_loss = 0.5907255411148071
eval_rouge1 = 62.0922
eval_rouge2 = 47.2761
eval_rougeL = 61.7706
eval_rougeLsum = 61.8036
eval_runtime = 4501.8065
eval_samples_per_second = 5.487
eval_steps_per_second = 2.743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.18.0
- Tokenizers 0.11.0
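No usage snippet is provided; a minimal inference sketch, assuming the prompted context-plus-question format shown in the widget above and the repository id of this model, could look like this:
```python
from transformers import pipeline

# Model id taken from this repository; the prompt format follows the widget example above
qa = pipeline("text2text-generation", model="GermanT5/german-t5-oscar-ep1-prompted-germanquad")
prompt = (
    "Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. Derzeit arbeitet er als "
    "Machine Learning Engineer und Tech Lead bei Hugging Face, um künstliche Intelligenz "
    "durch Open Source und Open Science zu demokratisieren.\n"
    "Welches Ziel hat Hugging Face?"
)
print(qa(prompt, max_length=64))
```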
|
GroNLP/bert-base-dutch-cased-frisian | 84b8d9d28470f188a14ba2d58efaac361a8b8c8d | 2021-05-18T20:20:35.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"fy",
"arxiv:2105.02855",
"transformers",
"BERTje",
"autotrain_compatible"
]
| fill-mask | false | GroNLP | null | GroNLP/bert-base-dutch-cased-frisian | 14 | 1 | transformers | 9,730 | ---
language: fy
tags:
- BERTje
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- [Paper](https://arxiv.org/abs/2105.02855)
- [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
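A minimal fill-mask sketch for the West Frisian lexical-layer model (the example sentence is only illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased-frisian")
# Illustrative West Frisian input; [MASK] is the BERT mask token
print(unmasker("Ik wenje yn [MASK]."))
```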
|
Hate-speech-CNERG/deoffxlmr-mono-malyalam | d88bde766cbfe7fffe40a13abf19ce69f27dd885 | 2021-09-25T14:01:42.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ml",
"transformers",
"license:apache-2.0"
]
| text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/deoffxlmr-mono-malyalam | 14 | null | transformers | 9,731 | ---
language: ml
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Malayalam Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Malayalam(pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-algorithm-based ensembling of test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model - 0.97, ensemble - 0.97).
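A minimal classification sketch (the input string is a placeholder for a Malayalam code-mixed comment):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hate-speech-CNERG/deoffxlmr-mono-malyalam")
text = "..."  # replace with a Malayalam code-mixed comment
print(classifier(text))
```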
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
Helsinki-NLP/opus-mt-bem-fi | 61e1a5e55e425e7319da8807f50a3c9db95f10d3 | 2021-09-09T21:27:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bem",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bem-fi | 14 | null | transformers | 9,732 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bem-fi
* source languages: bem
* target languages: fi
* OPUS readme: [bem-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fi | 22.8 | 0.439 |
|
Helsinki-NLP/opus-mt-ber-en | 14e47c430cec91689c36c2ff3170353ab9ed9d1d | 2021-09-09T21:27:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ber",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ber-en | 14 | null | transformers | 9,733 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ber-en
* source languages: ber
* target languages: en
* OPUS readme: [ber-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.en | 37.3 | 0.566 |
|
Helsinki-NLP/opus-mt-bzs-es | b03449222edb29b8497af1df03c30782995912f5 | 2021-09-09T21:28:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bzs",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bzs-es | 14 | null | transformers | 9,734 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bzs-es
* source languages: bzs
* target languages: es
* OPUS readme: [bzs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.es | 28.1 | 0.464 |
|
Helsinki-NLP/opus-mt-da-ru | 133f5847df6d5e352d37d86416db133636851c7a | 2021-01-18T07:57:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-da-ru | 14 | null | transformers | 9,735 | ---
language:
- da
- ru
tags:
- translation
license: apache-2.0
---
### dan-rus
* source group: Danish
* target group: Russian
* OPUS readme: [dan-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-rus/README.md)
* model: transformer-align
* source language(s): dan
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan.rus | 52.5 | 0.715 |
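A minimal translation sketch using the standard MarianMT classes (the model name is taken from this card; the Danish input is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-da-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Jeg bor i København."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```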
### System Info:
- hf_name: dan-rus
- source_languages: dan
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'ru']
- src_constituents: {'dan'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-rus/opus-2020-06-17.test.txt
- src_alpha3: dan
- tgt_alpha3: rus
- short_pair: da-ru
- chrF2_score: 0.715
- bleu: 52.5
- brevity_penalty: 0.991
- ref_len: 10480.0
- src_name: Danish
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: da
- tgt_alpha2: ru
- prefer_old: False
- long_pair: dan-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-de-ca | e335cb19fc68854e5f55ad84f1ade5c38567b10b | 2021-01-18T07:58:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"ca",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-ca | 14 | null | transformers | 9,736 | ---
language:
- de
- ca
tags:
- translation
license: apache-2.0
---
### deu-cat
* source group: German
* target group: Catalan
* OPUS readme: [deu-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-cat/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.cat | 37.4 | 0.582 |
### System Info:
- hf_name: deu-cat
- source_languages: deu
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ca']
- src_constituents: {'deu'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.test.txt
- src_alpha3: deu
- tgt_alpha3: cat
- short_pair: de-ca
- chrF2_score: 0.5820000000000001
- bleu: 37.4
- brevity_penalty: 0.956
- ref_len: 5507.0
- src_name: German
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: de
- tgt_alpha2: ca
- prefer_old: False
- long_pair: deu-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-de-da | bccfbee95d55ba1333fd447f67574453eba5d948 | 2021-09-09T21:30:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"da",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-da | 14 | null | transformers | 9,737 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-da
* source languages: de
* target languages: da
* OPUS readme: [de-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-da/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.da | 57.2 | 0.730 |
|
Helsinki-NLP/opus-mt-el-fi | aef52d8c3cc2129847cf9ea84c62a5e7b9bb41bc | 2021-09-09T21:33:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"el",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-el-fi | 14 | null | transformers | 9,738 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-el-fi
* source languages: el
* target languages: fi
* OPUS readme: [el-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.el.fi | 25.3 | 0.517 |
|
Helsinki-NLP/opus-mt-en-CELTIC | 69fe75e42d848a1b30f968800ff94783e3ed8fe2 | 2021-09-09T21:33:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"cel",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-CELTIC | 14 | null | transformers | 9,739 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-INSULAR_CELTIC
* source languages: en
* target languages: ga,cy,br,gd,kw,gv
* OPUS readme: [en-ga+cy+br+gd+kw+gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ga+cy+br+gd+kw+gv/README.md)
* dataset: opus+techiaith+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus+techiaith+bt-2020-04-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.zip)
* test set translations: [opus+techiaith+bt-2020-04-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.test.txt)
* test set scores: [opus+techiaith+bt-2020-04-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.eval.txt)
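Because this is a multilingual model, the sentence-initial `>>id<<` target-language token noted above is required; a minimal sketch (using Irish, `ga`, as the target) could look like this:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-CELTIC"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>ga<< prefix selects Irish as the target language
batch = tokenizer([">>ga<< How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```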
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ga | 22.8 | 0.404 |
|
Helsinki-NLP/opus-mt-en-lu | 46019cac051a37cc5b65765c14f26ce600a1709b | 2021-09-09T21:37:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-lu | 14 | null | transformers | 9,740 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lu
* source languages: en
* target languages: lu
* OPUS readme: [en-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lu | 34.1 | 0.564 |
|
Helsinki-NLP/opus-mt-en-pag | 051bcacbbb35a4418e46906d02716606f59a7c91 | 2021-09-09T21:38:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"pag",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-pag | 14 | null | transformers | 9,741 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-pag
* source languages: en
* target languages: pag
* OPUS readme: [en-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pag | 37.9 | 0.598 |
|
Helsinki-NLP/opus-mt-es-da | 661ce74f5258d3bf2f848527d55e3dc33c5793dc | 2021-09-09T21:41:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"da",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-da | 14 | null | transformers | 9,742 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-da
* source languages: es
* target languages: da
* OPUS readme: [es-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-da/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.da | 55.7 | 0.712 |
|
Helsinki-NLP/opus-mt-es-ee | 42f028f43c4aad6eed8ffd96ce928b8badc2014c | 2021-09-09T21:41:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ee",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ee | 14 | null | transformers | 9,743 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ee
* source languages: es
* target languages: ee
* OPUS readme: [es-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ee/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ee/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ee/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ee | 25.6 | 0.470 |
|
Helsinki-NLP/opus-mt-es-eu | f1b18888a188e4eb000c074075dc6ade3f33072a | 2021-01-18T08:23:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"eu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-eu | 14 | 1 | transformers | 9,744 | ---
language:
- es
- eu
tags:
- translation
license: apache-2.0
---
### spa-eus
* source group: Spanish
* target group: Basque
* OPUS readme: [spa-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eus/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.eus | 37.0 | 0.638 |
### System Info:
- hf_name: spa-eus
- source_languages: spa
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'eu']
- src_constituents: {'spa'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: eus
- short_pair: es-eu
- chrF2_score: 0.638
- bleu: 37.0
- brevity_penalty: 0.983
- ref_len: 10945.0
- src_name: Spanish
- tgt_name: Basque
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: eu
- prefer_old: False
- long_pair: spa-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-srn | 3ba24c7ae9f834b1696f7b7a0f53ddc79d583ce4 | 2021-09-09T21:44:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"srn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-srn | 14 | null | transformers | 9,745 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-srn
* source languages: es
* target languages: srn
* OPUS readme: [es-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.srn | 28.7 | 0.487 |
|
Helsinki-NLP/opus-mt-es-tzo | 2fd911169180598677e1b5f42438cc847e145799 | 2021-09-09T21:45:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"tzo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-tzo | 14 | null | transformers | 9,746 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tzo
* source languages: es
* target languages: tzo
* OPUS readme: [es-tzo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tzo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tzo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tzo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tzo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tzo | 22.6 | 0.469 |
|
Helsinki-NLP/opus-mt-fi-bzs | d5b30084df6c669c73224be1ca0f4116c6bdd9b0 | 2021-09-09T21:46:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"bzs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-bzs | 14 | null | transformers | 9,747 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-bzs
* source languages: fi
* target languages: bzs
* OPUS readme: [fi-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-bzs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-bzs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-bzs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.bzs | 27.2 | 0.459 |
|
Helsinki-NLP/opus-mt-fi-el | 214d53d289e8e90c6b8f65a5813460df95778c31 | 2021-09-09T21:47:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-el | 14 | null | transformers | 9,748 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-el
* source languages: fi
* target languages: el
* OPUS readme: [fi-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-el/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-el/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-el/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.el | 27.1 | 0.490 |
|
Helsinki-NLP/opus-mt-fi-is | 321972a9ee7a0fb0af9687d5cd5422d7565ae101 | 2021-09-09T21:48:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"is",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-is | 14 | null | transformers | 9,749 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-is
* source languages: fi
* target languages: is
* OPUS readme: [fi-is](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-is/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-is/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-is/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-is/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.is | 25.2 | 0.452 |
|
Helsinki-NLP/opus-mt-fr-ee | 40ac7fa7a28e1d4a8b07bed9703e0f7911a027e7 | 2021-09-09T21:53:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ee",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ee | 14 | null | transformers | 9,750 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ee
* source languages: fr
* target languages: ee
* OPUS readme: [fr-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ee | 26.3 | 0.466 |
|
Helsinki-NLP/opus-mt-fr-ha | ea7bb19a61b650a3c06bc96fb0fbed28c905ad49 | 2021-09-09T21:54:06.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ha",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ha | 14 | null | transformers | 9,751 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ha
* source languages: fr
* target languages: ha
* OPUS readme: [fr-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ha | 24.4 | 0.447 |
|
Helsinki-NLP/opus-mt-fr-swc | 71d83f817cb7cea1ec60779a48bae15c65632fca | 2021-09-09T21:57:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"swc",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-swc | 14 | null | transformers | 9,752 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-swc
* source languages: fr
* target languages: swc
* OPUS readme: [fr-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.swc | 28.2 | 0.499 |
|
Helsinki-NLP/opus-mt-hu-fi | c46fede8bed71dd3b278d868c078508da856ddd1 | 2021-09-09T22:10:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hu",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hu-fi | 14 | null | transformers | 9,753 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hu-fi
* source languages: hu
* target languages: fi
* OPUS readme: [hu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.fi | 48.2 | 0.700 |
|
Helsinki-NLP/opus-mt-iir-iir | f102b809778f107349c12a9ff0ec33cc03dc3cf5 | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-iir-iir | 14 | null | transformers | 9,754 | ---
language:
- bn
- or
- gu
- mr
- ur
- hi
- ps
- os
- as
- si
- iir
tags:
- translation
license: apache-2.0
---
### iir-iir
* source group: Indo-Iranian languages
* target group: Indo-Iranian languages
* OPUS readme: [iir-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-iir/README.md)
* model: transformer
* source language(s): asm hin mar urd zza
* target language(s): asm hin mar urd zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.asm-hin.asm.hin | 3.5 | 0.202 |
| Tatoeba-test.asm-zza.asm.zza | 12.4 | 0.014 |
| Tatoeba-test.hin-asm.hin.asm | 6.2 | 0.238 |
| Tatoeba-test.hin-mar.hin.mar | 27.0 | 0.560 |
| Tatoeba-test.hin-urd.hin.urd | 21.4 | 0.507 |
| Tatoeba-test.mar-hin.mar.hin | 13.4 | 0.463 |
| Tatoeba-test.multi.multi | 17.7 | 0.460 |
| Tatoeba-test.urd-hin.urd.hin | 13.4 | 0.363 |
| Tatoeba-test.zza-asm.zza.asm | 5.3 | 0.000 |
### System Info:
- hf_name: iir-iir
- source_languages: iir
- target_languages: iir
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-iir/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.test.txt
- src_alpha3: iir
- tgt_alpha3: iir
- short_pair: iir-iir
- chrF2_score: 0.46
- bleu: 17.7
- brevity_penalty: 1.0
- ref_len: 4992.0
- src_name: Indo-Iranian languages
- tgt_name: Indo-Iranian languages
- train_date: 2020-07-27
- src_alpha2: iir
- tgt_alpha2: iir
- prefer_old: False
- long_pair: iir-iir
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-is-de | eddbb688bad72d607a739bfd0c67ffff2db219c1 | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"is",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-is-de | 14 | null | transformers | 9,755 | ---
language:
- is
- de
tags:
- translation
license: apache-2.0
---
### isl-deu
* source group: Icelandic
* target group: German
* OPUS readme: [isl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.deu | 49.2 | 0.661 |
### System Info:
- hf_name: isl-deu
- source_languages: isl
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'de']
- src_constituents: {'isl'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: deu
- short_pair: is-de
- chrF2_score: 0.6609999999999999
- bleu: 49.2
- brevity_penalty: 0.998
- ref_len: 6265.0
- src_name: Icelandic
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: de
- prefer_old: False
- long_pair: isl-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ko-sv | a0ca94665e45cfc246a6cd64c817133d0252e4f4 | 2021-09-10T13:54:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ko",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ko-sv | 14 | null | transformers | 9,756 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ko-sv
* source languages: ko
* target languages: sv
* OPUS readme: [ko-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ko-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ko.sv | 26.5 | 0.468 |
|
Helsinki-NLP/opus-mt-ms-ms | 5319301b7bf40f9bb3f1ca6127b30d2482ed9a58 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ms",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ms-ms | 14 | null | transformers | 9,757 | ---
language:
- ms
tags:
- translation
license: apache-2.0
---
### msa-msa
* source group: Malay (macrolanguage)
* target group: Malay (macrolanguage)
* OPUS readme: [msa-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-msa/README.md)
* model: transformer-align
* source language(s): ind max_Latn min zlm_Latn zsm_Latn
* target language(s): ind max_Latn min zlm_Latn zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.msa | 18.6 | 0.418 |
### System Info:
- hf_name: msa-msa
- source_languages: msa
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: msa
- short_pair: ms-ms
- chrF2_score: 0.418
- bleu: 18.6
- brevity_penalty: 1.0
- ref_len: 6029.0
- src_name: Malay (macrolanguage)
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: ms
- prefer_old: False
- long_pair: msa-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-no-nl | ac00f451982b347686785ac445e2d80acc07df1b | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"no",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-no-nl | 14 | null | transformers | 9,758 | ---
language:
- no
- nl
tags:
- translation
license: apache-2.0
---
### nor-nld
* source group: Norwegian
* target group: Dutch
* OPUS readme: [nor-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nld | 40.2 | 0.596 |
### System Info:
- hf_name: nor-nld
- source_languages: nor
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'nl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nld
- short_pair: no-nl
- chrF2_score: 0.596
- bleu: 40.2
- brevity_penalty: 0.9590000000000001
- ref_len: 1535.0
- src_name: Norwegian
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: nl
- prefer_old: False
- long_pair: nor-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-nso-es | c7b2c4a81468174a13eb8052a13810d2433bd5a4 | 2021-09-10T13:59:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nso",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nso-es | 14 | null | transformers | 9,759 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-es
* source languages: nso
* target languages: es
* OPUS readme: [nso-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.es | 29.5 | 0.485 |
|
Helsinki-NLP/opus-mt-sk-fi | aaf7099a8ac22d19361eb8120b174410b8298312 | 2021-09-10T14:03:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sk",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sk-fi | 14 | null | transformers | 9,760 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sk-fi
* source languages: sk
* target languages: fi
* OPUS readme: [sk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.fi | 27.6 | 0.544 |
|
Helsinki-NLP/opus-mt-sv-sl | c9f797a7e609a8c507ce93c0ffa1ebb3d37d8d97 | 2021-09-10T14:09:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"sl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-sl | 14 | null | transformers | 9,761 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-sl
* source languages: sv
* target languages: sl
* OPUS readme: [sv-sl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sl | 25.1 | 0.487 |
|
Helsinki-NLP/opus-mt-tc-base-gmw-gmw | d688a46d3b297c8509a6f3d449a739d25d7270ee | 2022-06-01T13:10:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"de",
"en",
"fy",
"gmw",
"gos",
"hrx",
"lb",
"nds",
"nl",
"pdc",
"yi",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-gmw-gmw | 14 | null | transformers | 9,762 | ---
language:
- af
- de
- en
- fy
- gmw
- gos
- hrx
- lb
- nds
- nl
- pdc
- yi
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-gmw-gmw
results:
- task:
name: Translation afr-deu
type: translation
args: afr-deu
dataset:
name: flores101-devtest
type: flores_101
args: afr deu devtest
metrics:
- name: BLEU
type: bleu
value: 21.6
- task:
name: Translation afr-eng
type: translation
args: afr-eng
dataset:
name: flores101-devtest
type: flores_101
args: afr eng devtest
metrics:
- name: BLEU
type: bleu
value: 46.8
- task:
name: Translation deu-afr
type: translation
args: deu-afr
dataset:
name: flores101-devtest
type: flores_101
args: deu afr devtest
metrics:
- name: BLEU
type: bleu
value: 21.4
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: flores101-devtest
type: flores_101
args: deu eng devtest
metrics:
- name: BLEU
type: bleu
value: 33.8
- task:
name: Translation eng-afr
type: translation
args: eng-afr
dataset:
name: flores101-devtest
type: flores_101
args: eng afr devtest
metrics:
- name: BLEU
type: bleu
value: 33.8
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: flores101-devtest
type: flores_101
args: eng deu devtest
metrics:
- name: BLEU
type: bleu
value: 29.1
- task:
name: Translation eng-nld
type: translation
args: eng-nld
dataset:
name: flores101-devtest
type: flores_101
args: eng nld devtest
metrics:
- name: BLEU
type: bleu
value: 21.0
- task:
name: Translation nld-eng
type: translation
args: nld-eng
dataset:
name: flores101-devtest
type: flores_101
args: nld eng devtest
metrics:
- name: BLEU
type: bleu
value: 25.6
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 32.2
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 28.8
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: multi30k_test_2017_flickr
type: multi30k-2017_flickr
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 32.7
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: multi30k_test_2017_flickr
type: multi30k-2017_flickr
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 27.6
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: multi30k_test_2017_mscoco
type: multi30k-2017_mscoco
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 25.5
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: multi30k_test_2017_mscoco
type: multi30k-2017_mscoco
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 22.0
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 30.0
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 25.3
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: news-test2008
type: news-test2008
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 23.8
- task:
name: Translation afr-deu
type: translation
args: afr-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: afr-deu
metrics:
- name: BLEU
type: bleu
value: 48.1
- task:
name: Translation afr-eng
type: translation
args: afr-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: afr-eng
metrics:
- name: BLEU
type: bleu
value: 58.8
- task:
name: Translation afr-nld
type: translation
args: afr-nld
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: afr-nld
metrics:
- name: BLEU
type: bleu
value: 54.5
- task:
name: Translation deu-afr
type: translation
args: deu-afr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: deu-afr
metrics:
- name: BLEU
type: bleu
value: 52.4
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 42.1
- task:
name: Translation deu-nld
type: translation
args: deu-nld
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: deu-nld
metrics:
- name: BLEU
type: bleu
value: 48.7
- task:
name: Translation eng-afr
type: translation
args: eng-afr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-afr
metrics:
- name: BLEU
type: bleu
value: 56.5
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 35.9
- task:
name: Translation eng-nld
type: translation
args: eng-nld
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-nld
metrics:
- name: BLEU
type: bleu
value: 48.3
- task:
name: Translation fry-eng
type: translation
args: fry-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fry-eng
metrics:
- name: BLEU
type: bleu
value: 32.5
- task:
name: Translation fry-nld
type: translation
args: fry-nld
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fry-nld
metrics:
- name: BLEU
type: bleu
value: 43.1
- task:
name: Translation hrx-deu
type: translation
args: hrx-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hrx-deu
metrics:
- name: BLEU
type: bleu
value: 24.7
- task:
name: Translation hrx-eng
type: translation
args: hrx-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hrx-eng
metrics:
- name: BLEU
type: bleu
value: 20.4
- task:
name: Translation ltz-deu
type: translation
args: ltz-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ltz-deu
metrics:
- name: BLEU
type: bleu
value: 37.2
- task:
name: Translation ltz-eng
type: translation
args: ltz-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ltz-eng
metrics:
- name: BLEU
type: bleu
value: 32.4
- task:
name: Translation ltz-nld
type: translation
args: ltz-nld
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ltz-nld
metrics:
- name: BLEU
type: bleu
value: 39.3
- task:
name: Translation nds-deu
type: translation
args: nds-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nds-deu
metrics:
- name: BLEU
type: bleu
value: 34.5
- task:
name: Translation nds-eng
type: translation
args: nds-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nds-eng
metrics:
- name: BLEU
type: bleu
value: 29.9
- task:
name: Translation nds-nld
type: translation
args: nds-nld
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nds-nld
metrics:
- name: BLEU
type: bleu
value: 42.3
- task:
name: Translation nld-afr
type: translation
args: nld-afr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nld-afr
metrics:
- name: BLEU
type: bleu
value: 58.8
- task:
name: Translation nld-deu
type: translation
args: nld-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nld-deu
metrics:
- name: BLEU
type: bleu
value: 50.4
- task:
name: Translation nld-eng
type: translation
args: nld-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nld-eng
metrics:
- name: BLEU
type: bleu
value: 53.1
- task:
name: Translation nld-fry
type: translation
args: nld-fry
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nld-fry
metrics:
- name: BLEU
type: bleu
value: 25.1
- task:
name: Translation nld-nds
type: translation
args: nld-nds
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: nld-nds
metrics:
- name: BLEU
type: bleu
value: 21.4
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2009
type: wmt-2009-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 23.4
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2010
type: wmt-2010-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 25.8
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2010
type: wmt-2010-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 20.7
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2011
type: wmt-2011-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 23.7
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2012
type: wmt-2012-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 24.8
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2013
type: wmt-2013-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 27.7
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2013
type: wmt-2013-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 22.5
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2014-deen
type: wmt-2014-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 27.3
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2014-deen
type: wmt-2014-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 22.0
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2015-deen
type: wmt-2015-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 28.6
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2015-ende
type: wmt-2015-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 25.7
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2016-deen
type: wmt-2016-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 33.3
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2016-ende
type: wmt-2016-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 30.0
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2017-deen
type: wmt-2017-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 29.5
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2017-ende
type: wmt-2017-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 24.1
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2018-deen
type: wmt-2018-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 36.1
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2018-ende
type: wmt-2018-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 35.4
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2019-deen
type: wmt-2019-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 32.3
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2019-ende
type: wmt-2019-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 31.2
- task:
name: Translation deu-eng
type: translation
args: deu-eng
dataset:
name: newstest2020-deen
type: wmt-2020-news
args: deu-eng
metrics:
- name: BLEU
type: bleu
value: 32.0
- task:
name: Translation eng-deu
type: translation
args: eng-deu
dataset:
name: newstest2020-ende
type: wmt-2020-news
args: eng-deu
metrics:
- name: BLEU
type: bleu
value: 23.9
---
# opus-mt-tc-base-gmw-gmw
Neural machine translation model for translating from West Germanic languages (gmw) to West Germanic languages (gmw).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT β Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge β Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite them if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2021-02-23
* source language(s): afr deu eng fry gos hrx ltz nds nld pdc yid
* target language(s): afr deu eng fry nds nld
* valid target language labels: >>afr<< >>ang_Latn<< >>deu<< >>eng<< >>fry<< >>ltz<< >>nds<< >>nld<< >>sco<< >>yid<<
* model: transformer (base)
* data: opus ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opus-2021-02-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.zip)
* more information on released models: [OPUS-MT gmw-gmw README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form `>>id<<` (id = a valid target language ID), e.g. `>>afr<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>nld<< You need help.",
">>afr<< I love your son."
]
model_name = "pytorch-models/opus-mt-tc-base-gmw-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Je hebt hulp nodig.
# Ek is lief vir jou seun.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-gmw-gmw")
print(pipe(">>nld<< You need help."))
# expected output: Je hebt hulp nodig.
```
## Benchmarks
* test set translations: [opus-2021-02-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.test.txt)
* test set scores: [opus-2021-02-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
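The BLEU and chr-F numbers in the table below come from the standard OPUS-MT evaluation pipeline. As a rough illustration only, scores of this kind can be reproduced with the `sacrebleu` library, assuming the system output and reference translations for one language pair have been saved as plain-text files (the file names here are placeholders):
```python
import sacrebleu

# Minimal sketch: "hypotheses.txt" and "references.txt" are placeholder file names holding
# the system output and the reference translations, one sentence per line.
with open("hypotheses.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

print(sacrebleu.corpus_bleu(hyps, [refs]).score)   # BLEU, as reported in the table below
print(sacrebleu.corpus_chrf(hyps, [refs]).score)   # chr-F, as reported in the table below
```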
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| afr-deu | tatoeba-test-v2021-08-07 | 0.674 | 48.1 | 1583 | 9105 |
| afr-eng | tatoeba-test-v2021-08-07 | 0.728 | 58.8 | 1374 | 9622 |
| afr-nld | tatoeba-test-v2021-08-07 | 0.711 | 54.5 | 1056 | 6710 |
| deu-afr | tatoeba-test-v2021-08-07 | 0.696 | 52.4 | 1583 | 9507 |
| deu-eng | tatoeba-test-v2021-08-07 | 0.609 | 42.1 | 17565 | 149462 |
| deu-nds | tatoeba-test-v2021-08-07 | 0.442 | 18.6 | 9999 | 76137 |
| deu-nld | tatoeba-test-v2021-08-07 | 0.672 | 48.7 | 10218 | 75235 |
| eng-afr | tatoeba-test-v2021-08-07 | 0.735 | 56.5 | 1374 | 10317 |
| eng-deu | tatoeba-test-v2021-08-07 | 0.580 | 35.9 | 17565 | 151568 |
| eng-nds | tatoeba-test-v2021-08-07 | 0.412 | 16.6 | 2500 | 18264 |
| eng-nld | tatoeba-test-v2021-08-07 | 0.663 | 48.3 | 12696 | 91796 |
| fry-eng | tatoeba-test-v2021-08-07 | 0.500 | 32.5 | 220 | 1573 |
| fry-nld | tatoeba-test-v2021-08-07 | 0.633 | 43.1 | 260 | 1854 |
| gos-nld | tatoeba-test-v2021-08-07 | 0.405 | 15.6 | 1852 | 9903 |
| hrx-deu | tatoeba-test-v2021-08-07 | 0.484 | 24.7 | 471 | 2805 |
| hrx-eng | tatoeba-test-v2021-08-07 | 0.362 | 20.4 | 221 | 1235 |
| ltz-deu | tatoeba-test-v2021-08-07 | 0.556 | 37.2 | 347 | 2208 |
| ltz-eng | tatoeba-test-v2021-08-07 | 0.485 | 32.4 | 293 | 1840 |
| ltz-nld | tatoeba-test-v2021-08-07 | 0.534 | 39.3 | 292 | 1685 |
| nds-deu | tatoeba-test-v2021-08-07 | 0.572 | 34.5 | 9999 | 74564 |
| nds-eng | tatoeba-test-v2021-08-07 | 0.493 | 29.9 | 2500 | 17589 |
| nds-nld | tatoeba-test-v2021-08-07 | 0.621 | 42.3 | 1657 | 11490 |
| nld-afr | tatoeba-test-v2021-08-07 | 0.755 | 58.8 | 1056 | 6823 |
| nld-deu | tatoeba-test-v2021-08-07 | 0.686 | 50.4 | 10218 | 74131 |
| nld-eng | tatoeba-test-v2021-08-07 | 0.690 | 53.1 | 12696 | 89978 |
| nld-fry | tatoeba-test-v2021-08-07 | 0.478 | 25.1 | 260 | 1857 |
| nld-nds | tatoeba-test-v2021-08-07 | 0.462 | 21.4 | 1657 | 11711 |
| afr-deu | flores101-devtest | 0.524 | 21.6 | 1012 | 25094 |
| afr-eng | flores101-devtest | 0.693 | 46.8 | 1012 | 24721 |
| afr-nld | flores101-devtest | 0.509 | 18.4 | 1012 | 25467 |
| deu-afr | flores101-devtest | 0.534 | 21.4 | 1012 | 25740 |
| deu-eng | flores101-devtest | 0.616 | 33.8 | 1012 | 24721 |
| deu-nld | flores101-devtest | 0.516 | 19.2 | 1012 | 25467 |
| eng-afr | flores101-devtest | 0.628 | 33.8 | 1012 | 25740 |
| eng-deu | flores101-devtest | 0.581 | 29.1 | 1012 | 25094 |
| eng-nld | flores101-devtest | 0.533 | 21.0 | 1012 | 25467 |
| ltz-afr | flores101-devtest | 0.430 | 12.9 | 1012 | 25740 |
| ltz-deu | flores101-devtest | 0.482 | 17.1 | 1012 | 25094 |
| ltz-eng | flores101-devtest | 0.468 | 18.8 | 1012 | 24721 |
| ltz-nld | flores101-devtest | 0.409 | 10.7 | 1012 | 25467 |
| nld-afr | flores101-devtest | 0.494 | 16.8 | 1012 | 25740 |
| nld-deu | flores101-devtest | 0.501 | 17.9 | 1012 | 25094 |
| nld-eng | flores101-devtest | 0.551 | 25.6 | 1012 | 24721 |
| deu-eng | multi30k_test_2016_flickr | 0.546 | 32.2 | 1000 | 12955 |
| eng-deu | multi30k_test_2016_flickr | 0.582 | 28.8 | 1000 | 12106 |
| deu-eng | multi30k_test_2017_flickr | 0.561 | 32.7 | 1000 | 11374 |
| eng-deu | multi30k_test_2017_flickr | 0.573 | 27.6 | 1000 | 10755 |
| deu-eng | multi30k_test_2017_mscoco | 0.499 | 25.5 | 461 | 5231 |
| eng-deu | multi30k_test_2017_mscoco | 0.514 | 22.0 | 461 | 5158 |
| deu-eng | multi30k_test_2018_flickr | 0.535 | 30.0 | 1071 | 14689 |
| eng-deu | multi30k_test_2018_flickr | 0.547 | 25.3 | 1071 | 13703 |
| deu-eng | newssyscomb2009 | 0.527 | 25.4 | 502 | 11818 |
| eng-deu | newssyscomb2009 | 0.504 | 19.3 | 502 | 11271 |
| deu-eng | news-test2008 | 0.518 | 23.8 | 2051 | 49380 |
| eng-deu | news-test2008 | 0.492 | 19.3 | 2051 | 47447 |
| deu-eng | newstest2009 | 0.516 | 23.4 | 2525 | 65399 |
| eng-deu | newstest2009 | 0.498 | 18.8 | 2525 | 62816 |
| deu-eng | newstest2010 | 0.546 | 25.8 | 2489 | 61711 |
| eng-deu | newstest2010 | 0.508 | 20.7 | 2489 | 61503 |
| deu-eng | newstest2011 | 0.524 | 23.7 | 3003 | 74681 |
| eng-deu | newstest2011 | 0.493 | 19.2 | 3003 | 72981 |
| deu-eng | newstest2012 | 0.532 | 24.8 | 3003 | 72812 |
| eng-deu | newstest2012 | 0.493 | 19.5 | 3003 | 72886 |
| deu-eng | newstest2013 | 0.548 | 27.7 | 3000 | 64505 |
| eng-deu | newstest2013 | 0.517 | 22.5 | 3000 | 63737 |
| deu-eng | newstest2014-deen | 0.548 | 27.3 | 3003 | 67337 |
| eng-deu | newstest2014-deen | 0.532 | 22.0 | 3003 | 62688 |
| deu-eng | newstest2015-deen | 0.553 | 28.6 | 2169 | 46443 |
| eng-deu | newstest2015-ende | 0.544 | 25.7 | 2169 | 44260 |
| deu-eng | newstest2016-deen | 0.596 | 33.3 | 2999 | 64119 |
| eng-deu | newstest2016-ende | 0.580 | 30.0 | 2999 | 62669 |
| deu-eng | newstest2017-deen | 0.561 | 29.5 | 3004 | 64399 |
| eng-deu | newstest2017-ende | 0.535 | 24.1 | 3004 | 61287 |
| deu-eng | newstest2018-deen | 0.610 | 36.1 | 2998 | 67012 |
| eng-deu | newstest2018-ende | 0.613 | 35.4 | 2998 | 64276 |
| deu-eng | newstest2019-deen | 0.582 | 32.3 | 2000 | 39227 |
| eng-deu | newstest2019-ende | 0.583 | 31.2 | 1997 | 48746 |
| deu-eng | newstest2020-deen | 0.604 | 32.0 | 785 | 38220 |
| eng-deu | newstest2020-ende | 0.542 | 23.9 | 1418 | 52383 |
| deu-eng | newstestB2020-deen | 0.598 | 31.2 | 785 | 37696 |
| eng-deu | newstestB2020-ende | 0.532 | 23.3 | 1418 | 53092 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unionβs Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unionβs Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.12.3
* OPUS-MT git hash: e56a06b
* port time: Sun Feb 13 14:42:10 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-ti-en | 610176e10fde21d044d625ba8284095894b92a01 | 2021-09-11T10:48:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ti",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ti-en | 14 | null | transformers | 9,763 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ti-en
* source languages: ti
* target languages: en
* OPUS readme: [ti-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ti-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.eval.txt)
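## Usage
A minimal usage sketch with the transformers translation pipeline (the Tigrinya input is only a placeholder greeting):
```python
from transformers import pipeline

# Minimal sketch; replace the placeholder greeting with any Tigrinya sentence.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ti-en")
print(translator("αααα")[0]["translation_text"])
```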
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ti.en | 30.4 | 0.461 |
|
Helsinki-NLP/opus-mt-uk-sv | ccd53b0c75d1dc9367f29121202a0a8df67fae03 | 2021-09-11T10:51:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-sv | 14 | null | transformers | 9,764 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-uk-sv
* source languages: uk
* target languages: sv
* OPUS readme: [uk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.eval.txt)
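## Usage
A minimal usage sketch with the MarianMT classes (the Ukrainian input sentence is only an example):
```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal sketch; no target-language token is needed for this single-pair model.
model_name = "Helsinki-NLP/opus-mt-uk-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["ΠΡΠΈΠ²ΡΡ, ΡΠΊ ΡΠΏΡΠ°Π²ΠΈ?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```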
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.uk.sv | 27.8 | 0.474 |
|
Helsinki-NLP/opus-mt-uk-tr | ed2e1ff5a2fd011e4f337102bc08fea586f5ad0f | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-tr | 14 | null | transformers | 9,765 | ---
language:
- uk
- tr
tags:
- translation
license: apache-2.0
---
### ukr-tur
* source group: Ukrainian
* target group: Turkish
* OPUS readme: [ukr-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md)
* source language(s): ukr
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.tur | 39.3 | 0.655 |
### System Info:
- hf_name: ukr-tur
- source_languages: ukr
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'tr']
- src_constituents: {'ukr'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: tur
- short_pair: uk-tr
- chrF2_score: 0.655
- bleu: 39.3
- brevity_penalty: 0.934
- ref_len: 11844.0
- src_name: Ukrainian
- tgt_name: Turkish
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ukr-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-yap-en | 2c11ef76f03b654497b05cac46b09c8ba2821307 | 2021-09-11T10:52:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"yap",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-yap-en | 14 | null | transformers | 9,766 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yap-en
* source languages: yap
* target languages: en
* OPUS readme: [yap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.en | 30.2 | 0.452 |
|
Ilyes/wav2vec2-large-xlsr-53-french_punctuation | 6039d1712c8160c5614feb7b19467e8b52ba426b | 2021-07-05T14:28:11.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Ilyes | null | Ilyes/wav2vec2-large-xlsr-53-french_punctuation | 14 | null | transformers | 9,767 | ---
language: fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-French_punctuation by Ilyes Rebai
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
args: fr
metrics:
- name: Test WER and CER on text and puctuation prediction
types: [wer, cer]
values: [19.47%, 6.66%]
- name: Test WER and CER on text without punctuation
types: [wer, cer]
values: [17.88%, 6.37%]
---
## Evaluation on Common Voice FR Test
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
model_name = "Ilyes/wav2vec2-large-xlsr-53-french_punctuation"
device = "cuda"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "fr", split="test")
chars_to_ignore_regex = '[\;\:\"\β\%\β\β\οΏ½\β\β\β\β\β\β¦\Β·\Η\Β«\βΉ\Β»\βΊβ\β\\ΚΏ\ΚΎ\β\β\\|\;\:\*\β\β\β\β\_\/\:\Λ\;\=\Β«\Β»\β]'
def normalize_text(text):
text = text.lower().strip()
text = re.sub('Ε', 'oe', text)
text = re.sub('Γ¦', 'ae', text)
text = re.sub("β|Β΄|β²|ΚΌ|β|Κ»|`", "'", text)
text = re.sub("'+ ", " ", text)
text = re.sub(" '+", " ", text)
text = re.sub("'$", " ", text)
text = re.sub("' ", " ", text)
text = re.sub("β|β", "-", text)
text = re.sub(" -", "", text)
text = re.sub("- ", "", text)
text = re.sub(chars_to_ignore_regex, '', text)
return text
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = normalize_text(batch["sentence"])
return batch
# the resampler must exist before the dataset is mapped
resampler = torchaudio.transforms.Resample(48_000, 16_000)
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
# remove duplicates
batch["target"] = re.sub('\.+', '.', batch["target"])
batch["target"] = re.sub('\?+', '?', batch["target"])
batch["target"] = re.sub('!+', '!', batch["target"])
batch["target"] = re.sub(',+', ',', batch["target"])
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
## Some results
| Reference | Prediction |
| ------------- | ------------- |
| il vΓ©cut Γ new york et y enseigna une grande partie de sa vie. | il a vΓ©cu Γ new york et y enseigna une grande partie de sa vie. |
| au classement par nations, l'allemagne est la tenante du titre. | au classement der nation l'allemagne est la tenante du titre. |
| voici un petit calcul pour fixer les idΓ©es. | voici un petit calcul pour fixer les idΓ©es. |
| oh! tu dois Γͺtre beau avec | oh! tu dois Γͺtre beau avec. |
| babochet vous le voulez? | baboche, vous le voulez? |
| la commission est, par consΓ©quent, dΓ©favorable Γ cet amendement. | la commission est, par consΓ©quent, dΓ©favorable Γ cet amendement. |
All the references and predictions of the test corpus are already available in this repository.
## Results
| Output | WER | CER |
| ------------- | ------------- | ------------- |
| text + punctuation | 21.47% | 7.21% |
| text (without punctuation) | 19.71% | 6.91% |
|
KoichiYasuoka/roberta-small-japanese-aozora | fdb370a3c684145416d8b4fd0ee204b3831ad1fc | 2021-11-03T14:44:50.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-small-japanese-aozora | 14 | null | transformers | 9,768 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "ζ₯ζ¬γ«ηγγγ[MASK]γθ¨ͺγγͺγγγ"
---
# roberta-small-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on ιη©ΊζεΊ« texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-small-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
```
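A short fill-mask sketch with the example sentence from the widget above (the predicted tokens are model-dependent):
```python
from transformers import pipeline

# Minimal sketch using the widget sentence from this card.
fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-small-japanese-aozora")
print(fill_mask("ζ₯ζ¬γ«ηγγγ[MASK]γθ¨ͺγγͺγγγ"))
```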
|
M47Labs/italian_news_classification_headlines | c41a7de361df020eeb2910d5b948e18d01ba2425 | 2021-09-07T15:09:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | M47Labs | null | M47Labs/italian_news_classification_headlines | 14 | null | transformers | 9,769 | Entry not found |
MagicalCat29/model_save_test2 | 25f966a559f8257b47a3a19fd5b03587179aa7b3 | 2022-02-16T14:41:10.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:other",
"autotrain_compatible"
]
| token-classification | false | MagicalCat29 | null | MagicalCat29/model_save_test2 | 14 | null | transformers | 9,770 | ---
license: other
---
|
Manishl7/xlm-roberta-large-language-detection | b6bd9814bb85ca220d98584a8406cd91e2d954dc | 2021-10-20T05:20:44.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Manishl7 | null | Manishl7/xlm-roberta-large-language-detection | 14 | 1 | transformers | 9,771 | Language Detection Model for Nepali, English, Hindi and Spanish
|
MariamD/my-t5-qa-legal | c91ded2feb2e50c441517f10d24240a3bbeb0953 | 2021-10-17T13:20:41.000Z | [
"pytorch",
"english",
"dataset:legal dataset",
"question-answering"
]
| question-answering | false | MariamD | null | MariamD/my-t5-qa-legal | 14 | null | null | 9,772 | ---
language: english
datasets:
- legal dataset
pipeline_tag: question-answering
--- |
MaxVortman/bert-base-ukr-eng-rus-uncased | 3b893995f39011b9b6c4d5df96fccd9dc7e17293 | 2021-07-21T12:05:26.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | MaxVortman | null | MaxVortman/bert-base-ukr-eng-rus-uncased | 14 | null | transformers | 9,773 | This repository shares smaller version of bert-base-multilingual-uncased that keeps only Ukrainian, English, and Russian tokens in the vocabulary.
| Model | Num parameters | Size |
| ----------------------------------------- | -------------- | --------- |
| bert-base-multilingual-uncased | 167 million | ~650 MB |
| MaxVortman/bert-base-ukr-eng-rus-uncased | 110 million | ~423 MB | |
Media1129/keyword-tag-model-2000 | cf96612e6d1f4079606d6965f2d287c7de7c12a1 | 2021-08-30T04:35:32.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-2000 | 14 | null | transformers | 9,774 | Entry not found |
MickyMike/0-GPT2SP-jirasoftware | 4faf20db8607a6039692efbd376efa53136d14f5 | 2021-08-19T02:01:12.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/0-GPT2SP-jirasoftware | 14 | null | transformers | 9,775 | Entry not found |
Muennighoff/SGPT-1.3B-weightedmean-nli | 72a2b83739fcff05da3190fb49ead86866464bd2 | 2022-02-21T06:15:32.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-1.3B-weightedmean-nli | 14 | null | sentence-transformers | 9,776 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-1.3B-weightedmean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
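As a minimal sketch, assuming the model is loaded through the sentence-transformers library (consistent with the architecture listed below):
```python
from sentence_transformers import SentenceTransformer

# Minimal sketch; the example sentences are arbitrary.
model = SentenceTransformer("Muennighoff/SGPT-1.3B-weightedmean-nli")
embeddings = model.encode(["A man is eating food.", "A man is eating a piece of bread."])
print(embeddings.shape)  # (2, 2048), matching the pooling dimension listed below
```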
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
RASMUS/wav2vec2-xlsr-1b-et | a72c8ad9eb48cbda84dcf8566c8cd090c144e997 | 2022-03-24T11:55:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"mozilla-foundation/common_voice_8_0",
"audio",
"speech",
"robust-speech-event",
"hf-asr-leaderboard",
"model-index"
]
| automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-1b-et | 14 | null | transformers | 9,777 | ---
language: et
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
tags:
- generated_from_trainer
- mozilla-foundation/common_voice_8_0
- audio
- automatic-speech-recognition
- speech
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: XLS-R 1B Wav2Vec2 Estonian by Rasmus Toivanen
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: et
metrics:
- name: Test WER
type: wer
value: 20.12
- name: Test CER
type: cer
value: 3.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: et
metrics:
- name: Test WER
type: wer
value: 40.77
- name: Test CER
type: cer
value: 12.32
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: et
metrics:
- name: Test WER
type: wer
value: 41.97
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-et-lm-1B
This model was fine-tuned on the Estonian (et) portion of mozilla-foundation/common_voice_8_0, using the train+other+validation splits.
It achieves the following results on the test set:
(Loss reported with last eval step at step 2000/2040 during training)
- Loss: 0.2150
- Wer: 0.2012
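A minimal transcription sketch with the transformers ASR pipeline (the audio file name is a placeholder for any 16 kHz Estonian recording):
```python
from transformers import pipeline

# Minimal sketch; "sample.wav" is a placeholder path to an Estonian speech recording.
asr = pipeline("automatic-speech-recognition", model="RASMUS/wav2vec2-xlsr-1b-et")
print(asr("sample.wav")["text"])
```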
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
RJ3vans/CMV1spanTagger | db91a06b31dc86cce3548ba0868404044ba459dd | 2021-09-07T13:26:30.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | RJ3vans | null | RJ3vans/CMV1spanTagger | 14 | null | transformers | 9,778 | This model identifies compound verb phrases (including conjoins and coordinators) in an input sentence.
Try the test sentence:
John kicked the ball [and] chased after it.
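A minimal sketch with the token-classification pipeline (the tag names emitted come from the model's own label set):
```python
from transformers import pipeline

# Minimal sketch using the test sentence above; sub-word aggregation is left to the pipeline defaults.
tagger = pipeline("token-classification", model="RJ3vans/CMV1spanTagger")
print(tagger("John kicked the ball and chased after it."))
```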
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton. |
Recognai/selectra_small | e45e0de5d6a68200c4b6894d06e16d7b3ef3ace4 | 2021-10-19T15:28:17.000Z | [
"pytorch",
"electra",
"pretraining",
"es",
"dataset:oscar",
"transformers",
"license:apache-2.0"
]
| null | false | Recognai | null | Recognai/selectra_small | 14 | 5 | transformers | 9,779 | ---
language:
- es
thumbnail: "url to a thumbnail used in social sharing"
license: apache-2.0
datasets:
- oscar
---
# SELECTRA: A Spanish ELECTRA
SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release a `small` and `medium` version with the following configuration:
| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| **SELECTRA small** | **12** | **256** | **22M** | **50k** | **512** | **True** |
| [SELECTRA medium](https://huggingface.co/Recognai/selectra_medium) | 12 | 384 | 41M | 50k | 512 | True |
**SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results** (see Metrics section below).
## Usage
From the original [ELECTRA model card](https://huggingface.co/google/electra-small-discriminator): "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")
sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]
print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando pan rosa con tomate y aceite de oliva .
-3.1 -3.6 -6.9 -3.0 0.19 -4.5 -3.3 -5.1 -5.7 -7.7 -4.4 -4.2
"""
```
However, you probably want to use this model to fine-tune it on a downstream task.
We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which can be used together with the zero-shot classification pipeline:
- [Zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small)
- [Zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium)
## Metrics
We fine-tune our models on 3 different down-stream tasks:
- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
| --- | --- | --- | --- | --- |
| SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
| SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
| | | | | |
| [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
| [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
| [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
| [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
| [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
| [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
Some details of our fine-tuning runs:
- epochs: 5
- batch-size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
For all the details, check out our [selectra repo](https://github.com/recognai/selectra).
## Training
We pre-trained our SELECTRA models on the Spanish portion of the [Oscar](https://huggingface.co/datasets/oscar) dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
- steps: 300k
- batch-size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
For all details, check out our [selectra repo](https://github.com/recognai/selectra).
**Note:** Due to a misconfiguration in the pre-training scripts the embeddings of the vocabulary containing an accent were not optimized. If you fine-tune this model on a down-stream task, you might consider using a tokenizer that does not strip the accents:
```python
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```
## Motivation
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models, and a lack of comparisons between those models and their bigger siblings.
## Acknowledgment
This research was supported by the Google TPU Research Cloud (TRC) program.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Javier Lopez ([GitHub](https://github.com/javispp))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon)) |
Rexhaif/rubert-base-srl | f6fffd572f1ec2765269cab48be3e3be3c3bd3d3 | 2021-11-10T22:17:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Rexhaif | null | Rexhaif/rubert-base-srl | 14 | null | transformers | 9,780 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: rubert-base-srl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-srl
This model is a fine-tuned version of ruBert-base on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2429
- F1: 0.9563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5816 | 1.0 | 57 | 0.3865 | 0.8371 |
| 0.3685 | 2.0 | 114 | 0.1707 | 0.9325 |
| 0.1057 | 3.0 | 171 | 0.0972 | 0.9563 |
| 0.0964 | 4.0 | 228 | 0.1429 | 0.9775 |
| 0.1789 | 5.0 | 285 | 0.2493 | 0.9457 |
| 0.0016 | 6.0 | 342 | 0.1900 | 0.6349 |
| 0.0013 | 7.0 | 399 | 0.2060 | 0.9563 |
| 0.0008 | 8.0 | 456 | 0.2321 | 0.9563 |
| 0.0006 | 9.0 | 513 | 0.2412 | 0.9563 |
| 0.0006 | 10.0 | 570 | 0.2429 | 0.9563 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Rostlab/prot_electra_discriminator_bfd | f62ae0934f54eff38f65ac892c0ea9ac6f2660ac | 2020-12-18T20:10:21.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| null | false | Rostlab | null | Rostlab/prot_electra_discriminator_bfd | 14 | 1 | transformers | 9,781 | Entry not found |
SEBIS/code_trans_t5_base_code_documentation_generation_python_transfer_learning_finetune | d22b5382c3336b622ec0a67a30eb530b837f0f8d | 2021-06-23T04:48:38.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_python_transfer_learning_finetune | 14 | null | transformers | 9,782 | ---
tags:
- summarization
widget:
- text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
---
# CodeTrans model for code documentation generation python
Pretrained model on programming language python using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the python function/method.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/python/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask | d74e5428319130225ba1aa862dac021a52d5d766 | 2021-06-23T05:14:09.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask | 14 | null | transformers | 9,783 | ---
tags:
- summarization
widget:
- text: "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
---
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_csharp_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/csharp/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 160,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_trans_sv_en | f8cfe534f0bbe4f4cdc27ad3f7cbf29e3d86c7a0 | 2021-06-23T10:08:13.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Swedish English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Swedish English model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_sv_en | 14 | null | transformers | 9,784 |
---
language: Swedish English
tags:
- translation Swedish English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Om rΓ€ttsliga fΓΆrfaranden inleds rΓΆrande omstΓ€ndigheter som ombudsmannen utreder skall han avsluta Γ€rendet."
---
# legal_t5_small_trans_sv_en model
Model on translating legal text from Swedish to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpus from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_sv_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to English.
### How to use
Here is how to use this model to translate legal text from Swedish to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_sv_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
sv_text = "Om rΓ€ttsliga fΓΆrfaranden inleds rΓΆrande omstΓ€ndigheter som ombudsmannen utreder skall han avsluta Γ€rendet."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 60 million parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) used by this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_en | 52.025|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
Sakil/distilbert_lazylearner_hatespeech_detection | cc07edc263e27a23453611d11d2d33caeaa5dce2 | 2022-02-20T15:00:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"hate",
"speech",
"license:apache-2.0"
]
| text-classification | false | Sakil | null | Sakil/distilbert_lazylearner_hatespeech_detection | 14 | null | transformers | 9,785 | ---
license: apache-2.0
language: en
tags:
- hate
- speech
widget:
- text: "RT @ShenikaRoberts: The shit you hear about me might be true or it might be faker than the bitch who told it to ya ᙨ"
---
# Dataset Collection:
* The hate speech dataset is collected from different open sources such as Kaggle and social media platforms like Twitter.
* The dataset has two classes: hate speech and non-hate speech.
* The class distribution is balanced.
* Different strategies were followed during the data-gathering phase.
* The dataset is collected from relevant sources.
# distilbert-base-uncased model is fine-tuned for Hate Speech Detection
* The model is fine-tuned on this dataset.
* It can be used to create labels for academic or industrial purposes.
* It can also be used directly for inference.
# Data Fields:
**label**: 0 - hate speech, 1 - not hate speech
# Application:
* This model is useful for detecting hate speech in tweets.
* There are numerous situations where tweet data is available but unlabeled, so this approach can be used to create labels.
* You can fine-tune this model further for your particular use cases.
# Model Implementation
```python
# !pip install transformers[sentencepiece]
from transformers import pipeline

model_name = "Sakil/distilbert_lazylearner_hatespeech_detection"
classifier = pipeline("text-classification", model=model_name)
classifier("!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. & as a man you should always take the trash out...")
```
# Github: [Sakil Ansari](https://github.com/Sakil786/hate_speech_detection_pretrained_model) |
SetFit/distilbert-base-uncased__TREC-QC__all-train | 56da4b93982f9fbf65ab88350068a71e2f36ecec | 2022-01-26T20:30:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__TREC-QC__all-train | 14 | null | transformers | 9,786 | Entry not found |
Sora4762/DialoGPT-small-naruto | f6737e28cb5abf95fa60ea2d21a41a446d0e859e | 2022-01-20T17:50:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Sora4762 | null | Sora4762/DialoGPT-small-naruto | 14 | null | transformers | 9,787 | ---
tags:
- conversational
---
# Naruto DialoGPT Model |
Tatyana/rubert_conversational_cased_sentiment | 95945f1fad53246fa0f47f8aff9a299ca9e367f7 | 2021-05-19T22:26:59.000Z | [
"pytorch",
"bert",
"ru",
"dataset:Tatyana/ru_sentiment_dataset",
"transformers",
"sentiment",
"text-classification"
]
| text-classification | false | Tatyana | null | Tatyana/rubert_conversational_cased_sentiment | 14 | null | transformers | 9,788 | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- Tatyana/ru_sentiment_dataset
---
# Keras model with ruBERT conversational embedder for Sentiment Analysis
Russian texts sentiment classification.
Model trained on [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset)
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
!pip install tensorflow-gpu
!pip install deeppavlov
!python -m deeppavlov install squad_bert
!pip install fasttext
!pip install transformers
!python -m deeppavlov install bert_sentence_embedder

from deeppavlov import build_model

# Build the model from the custom DeepPavlov config shipped with this repository
model = build_model("Tatyana/rubert_conversational_cased_sentiment/custom_config.json")

# Example Russian inputs (restored from garbled encoding): "The weather is good today",
# "I am happy to spend time with you", "I like this piece of music"
model(["Сегодня хорошая погода", "Я счастлив проводить с тобой время", "Мне нравится эта музыкальная композиция"])
```
|
Tsubasaz/clinical-pubmed-bert-base-128 | 73a95af29ac4204421b4ca5828297cc05d8f373a | 2022-01-27T15:44:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:MIMIC-III",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | Tsubasaz | null | Tsubasaz/clinical-pubmed-bert-base-128 | 14 | null | transformers | 9,789 | ---
language:
- en
license: mit
datasets:
- MIMIC-III
widget:
- text: "Due to shortness of breath, the patient is diagnosed with [MASK], and other respiratory problems."
example_title: "Example 1"
---
# ClinicalPubMedBERT
## Description
A BERT model pre-trained on PubMed abstracts and continually pre-trained on clinical notes ([MIMIC-III](https://mimic.physionet.org/)). We combine two domains that have little overlap with general-knowledge text corpora: EHRs and biomedical papers. We hope this model can deliver better results on clinical downstream tasks such as readmission prediction.
This model is trained on 500,000 clinical notes randomly sampled from the MIMIC datasets for 120k training steps. We also used whole word masking to enhance the coherence of the language model. All notes are chunked to a length of 128 tokens.
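A minimal usage sketch (assuming the standard `transformers` fill-mask pipeline; the example sentence mirrors the widget above):
```python
from transformers import pipeline

# Load the continually pre-trained checkpoint as a fill-mask pipeline
unmasker = pipeline("fill-mask", model="Tsubasaz/clinical-pubmed-bert-base-128")

# Predict the masked token in a clinical-style sentence
predictions = unmasker(
    "Due to shortness of breath, the patient is diagnosed with [MASK], "
    "and other respiratory problems."
)
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```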
Pre-trained model: https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract |
Vaibhavbrkn/mbart-english-hindi | 3c96d2df0c2dc55af2e1b31c97edb3dc62d872df | 2021-06-12T03:41:52.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Vaibhavbrkn | null | Vaibhavbrkn/mbart-english-hindi | 14 | null | transformers | 9,790 | Entry not found |
Yuchen/muril-large-cased-hita-qa | 2062749a61d49a49b1e1af224f6c7f41acd5f80c | 2022-07-23T07:01:06.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | Yuchen | null | Yuchen/muril-large-cased-hita-qa | 14 | null | transformers | 9,791 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
# Question Answering model for Hindi and Tamil
This model is part of the ensemble that ranked 4/943 in the [Hindi and Tamil Question Answering](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition held by Google Research India at Kaggle.
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Yuchen/muril-large-cased-hita-qa")
model = AutoModelForQuestionAnswering.from_pretrained("Yuchen/muril-large-cased-hita-qa")
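# --- Hedged usage sketch (not part of the original card): run extractive QA with the
# --- question-answering pipeline; the Hindi question and context below are illustrative.
from transformers import pipeline
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(question="भारत की राजधानी क्या है?",        # "What is the capital of India?"
            context="भारत की राजधानी नई दिल्ली है।")   # "The capital of India is New Delhi."
print(result["answer"], result["score"])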
``` |
Zixtrauce/SelfAwareness | 6183de34e056c91d0010af5fbe1a15e77dbf2d61 | 2022-01-02T04:38:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Zixtrauce | null | Zixtrauce/SelfAwareness | 14 | 1 | transformers | 9,792 | ---
tags:
- conversational
---
# SelfAwareness |
Zohar/distilgpt2-finetuned-restaurant-reviews | 1be19e2016e8d06fd229b3da6f86d12a5676fd0f | 2022-02-16T12:53:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | Zohar | null | Zohar/distilgpt2-finetuned-restaurant-reviews | 14 | null | transformers | 9,793 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-restaurant-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-restaurant-reviews
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a subset of the Yelp restaurant reviews dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4668
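As a rough usage sketch (assuming the standard `transformers` text-generation pipeline; the prompt is illustrative, not taken from the training data):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-generation pipeline
generator = pipeline("text-generation", model="Zohar/distilgpt2-finetuned-restaurant-reviews")

# Sample two short review-style continuations from an illustrative prompt
outputs = generator(
    "The food at this place was",
    max_length=60,
    num_return_sequences=2,
    do_sample=True,
    top_p=0.95,
)
for out in outputs:
    print(out["generated_text"])
```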
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6331 | 1.0 | 2536 | 3.5280 |
| 3.5676 | 2.0 | 5072 | 3.4793 |
| 3.5438 | 3.0 | 7608 | 3.4668 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
|
aapot/wav2vec2-xlsr-1b-finnish-v2 | 40a3206c10975e73f04f947ac9fc259b017793b0 | 2022-03-28T17:49:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"transformers",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | aapot | null | aapot/wav2vec2-xlsr-1b-finnish-v2 | 14 | null | transformers | 9,794 | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 9.73
- name: Test CER
type: cer
value: 1.65
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version with KenLM language model used in the decoding phase producing better transcriptions: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is fine-tuned version of the pretrained model (1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
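For a quick start, here is a minimal transcription sketch (assuming `transformers`, `torch` and `librosa` are installed; the audio file name is a placeholder and the audio is resampled to the expected 16 kHz):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load the fine-tuned acoustic model and its processor
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-xlsr-1b-finnish-v2")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-xlsr-1b-finnish-v2")

# Load an audio file (placeholder name) and resample it to 16 kHz
speech, _ = librosa.load("finnish_sample.wav", sr=16000)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding (no language model)
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```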
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio of similar length. However, you can also try it with much longer audio and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the data used for fine-tuning came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in these datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include maximum length of 20 seconds long audio samples.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
Training script was provided by Hugging Face and it is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
abdelkader/distilbert-base-uncased-finetuned-emotion | 8cf56c72387bef639411e4850a118339e94f11c4 | 2022-01-04T23:18:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | abdelkader | null | abdelkader/distilbert-base-uncased-finetuned-emotion | 14 | null | transformers | 9,795 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215604730468001
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.9215
- F1: 0.9216
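As a brief usage sketch (assuming the standard `transformers` text-classification pipeline; the input sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier
classifier = pipeline("text-classification", model="abdelkader/distilbert-base-uncased-finetuned-emotion")

# Classify an illustrative sentence; return_all_scores shows the score for every emotion label
print(classifier("I can't wait to see my friends this weekend!", return_all_scores=True))
```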
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8007 | 1.0 | 250 | 0.3082 | 0.907 | 0.9045 |
| 0.2438 | 2.0 | 500 | 0.2162 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
addy88/wav2vec2-telugu-stt | b66519593f1d9ac1ac77b4607b190287b6e24566 | 2021-12-19T15:39:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | addy88 | null | addy88/wav2vec2-telugu-stt | 14 | null | transformers | 9,796 | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-telugu-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-telugu-stt")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # INFERENCE: retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
``` |
ainize/gpt2-simpsons-script-large | 898a39d4cc32a9e6252c93501a344cdfd312a81a | 2021-05-21T12:13:28.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ainize | null | ainize/gpt2-simpsons-script-large | 14 | null | transformers | 9,797 | Entry not found |
aloxatel/W2L | f8a6e3b403f24f0d1e2b3f1abffaa9cb3338d2f2 | 2021-05-20T14:04:23.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | aloxatel | null | aloxatel/W2L | 14 | null | transformers | 9,798 | Entry not found |
annedirkson/BERT_embeddings_ADR_normalization | 90551034d8d7b3510b35c99bafa21ce050768ea3 | 2022-03-02T13:57:14.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | annedirkson | null | annedirkson/BERT_embeddings_ADR_normalization | 14 | null | transformers | 9,799 | Entry not found |