pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---
text-classification | transformers | This model detects **hate speech** in the **Italian language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Italian-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.837288, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
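The checkpoint can be loaded with the standard Transformers text-classification pipeline. A minimal sketch (the helper name and commented example are illustrative, not from the original card; label names come from the checkpoint's config):

```python
# Minimal usage sketch: wrap the checkpoint in a text-classification pipeline.
# Requires the `transformers` library and a backend such as PyTorch.
MODEL_ID = "Hate-speech-CNERG/dehatebert-mono-italian"

def build_classifier(model_id: str = MODEL_ID):
    """Return a text-classification pipeline for this checkpoint."""
    from transformers import pipeline  # deferred import; downloads weights on first use
    return pipeline("text-classification", model=model_id)

# clf = build_classifier()
# clf("testo di esempio")  # returns a list of {"label": ..., "score": ...} dicts
```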
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "it", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-italian | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"it",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model detects **hate speech** in the **Polish language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Polish-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.723254, was achieved with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
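As with the sibling monolingual checkpoints, the model can be used through the Transformers text-classification pipeline. A minimal, illustrative sketch (helper name is not from the original card):

```python
# Minimal usage sketch for the Polish checkpoint.
MODEL_ID = "Hate-speech-CNERG/dehatebert-mono-polish"

def build_classifier(model_id: str = MODEL_ID):
    """Return a text-classification pipeline for this checkpoint."""
    from transformers import pipeline  # deferred import; requires `transformers`
    return pipeline("text-classification", model=model_id)
```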
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "pl", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-polish | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"pl",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model detects **hate speech** in the **Portuguese language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Portuguese-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.716119, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
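Usage follows the same pattern as the other monolingual checkpoints; a minimal, illustrative sketch (helper name is not from the original card; note the "portugese" spelling in the repository id):

```python
# Minimal usage sketch for the Portuguese checkpoint.
MODEL_ID = "Hate-speech-CNERG/dehatebert-mono-portugese"  # repo id spelling is intentional

def build_classifier(model_id: str = MODEL_ID):
    """Return a text-classification pipeline for this checkpoint."""
    from transformers import pipeline  # deferred import; requires `transformers`
    return pipeline("text-classification", model=model_id)
```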
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "pt", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-portugese | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"pt",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model detects **hate speech** in the **Spanish language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Spanish-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with several learning rates; the best validation score, 0.740287, was achieved with a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).
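A minimal, illustrative usage sketch via the Transformers pipeline (helper name is not from the original card):

```python
# Minimal usage sketch for the Spanish checkpoint.
MODEL_ID = "Hate-speech-CNERG/dehatebert-mono-spanish"

def build_classifier(model_id: str = MODEL_ID):
    """Return a text-classification pipeline for this checkpoint."""
    from transformers import pipeline  # deferred import; requires `transformers`
    return pipeline("text-classification", model=model_id)
```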
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
| {"language": "es", "license": "apache-2.0"} | Hate-speech-CNERG/dehatebert-mono-spanish | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"es",
"arxiv:2004.06465",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used to detect **offensive content** in **Kannada code-mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Kannada (pure and code-mixed) data. The weights are initialized from pretrained XLM-RoBERTa-Base and further pretrained with masked language modelling on the target dataset before fine-tuning with cross-entropy loss.
This model is the best of several trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A genetic-algorithm-based ensemble of the test predictions achieved the second-highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model 0.73, ensemble 0.74).
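For completeness, a minimal inference sketch using the lower-level Auto classes (illustrative, not from the original card; class names are read from the checkpoint's `id2label` config):

```python
# Minimal inference sketch: tokenize, run the classifier head, take the argmax.
# Requires `transformers` and `torch`; weights download from the Hub on first use.
MODEL_ID = "Hate-speech-CNERG/deoffxlmr-mono-kannada"

def classify(texts):
    """Label Kannada / code-mixed strings with the checkpoint's classes."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    batch = tok(list(texts), padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**batch).logits.argmax(dim=-1)
    return [model.config.id2label[int(i)] for i in pred]
```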
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ | {"language": "kn", "license": "apache-2.0"} | Hate-speech-CNERG/deoffxlmr-mono-kannada | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"kn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used to detect **offensive content** in **Malayalam code-mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Malayalam (pure and code-mixed) data. The weights are initialized from pretrained XLM-RoBERTa-Base and further pretrained with masked language modelling on the target dataset before fine-tuning with cross-entropy loss.
This model is the best of several trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A genetic-algorithm-based ensemble of the test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model 0.97, ensemble 0.97).
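A minimal inference sketch, analogous to the sibling Dravidian checkpoints (illustrative, not from the original card; note the "malyalam" spelling in the repository id):

```python
# Minimal inference sketch for the Malayalam checkpoint.
MODEL_ID = "Hate-speech-CNERG/deoffxlmr-mono-malyalam"  # repo id spelling is intentional

def classify(texts):
    """Label Malayalam / code-mixed strings with the checkpoint's classes."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    batch = tok(list(texts), padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**batch).logits.argmax(dim=-1)
    return [model.config.id2label[int(i)] for i in pred]
```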
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ | {"language": "ml", "license": "apache-2.0"} | Hate-speech-CNERG/deoffxlmr-mono-malyalam | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ml",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and further pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-algorithm-based ensembling of test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.76, ensemble - 0.78)
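The leaderboard metric is the weighted F1 score. As an illustrative sketch only (not the shared task's official evaluation script), the weighted average of per-class F1 scores can be computed like this:

```python
def weighted_f1(y_true, y_pred):
    # Per-class F1, averaged with weights proportional to class support.
    labels = sorted(set(y_true))
    total = len(y_true)
    score = 0.0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        support = sum(1 for t in y_true if t == c)
        score += (support / total) * f1
    return score
```

Libraries such as scikit-learn expose the same quantity as `f1_score(..., average="weighted")`.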
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ | {"language": "ta", "license": "apache-2.0"} | Hate-speech-CNERG/deoffxlmr-mono-tamil | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ta"
] | TAGS
#transformers #pytorch #xlm-roberta #text-classification #ta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| This model is used to detect Offensive Content in Tamil Code-Mixed language. The mono in the name refers to the monolingual setting, where the model is trained using only Tamil(pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of multiple trained for EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages. Genetic-Algorithm based ensembled test predictions got the highest weighted F1 score at the leaderboard (Weighted F1 score on hold out test set: This model - 0.76, Ensemble - 0.78)
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection".
*Please cite our paper in any published work that uses any of these resources.*
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "URL
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {''}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ | [
"### For more details about our paper\n\nDebjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. \"Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection\".\n\n*Please cite our paper in any published work that uses any of these resources.*\n~~~\n@inproceedings{saha-etal-2021-hate,\n title = \"Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection\",\n author = \"Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh\",\n booktitle = \"Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages\",\n month = apr,\n year = \"2021\",\n address = \"Kyiv\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n pages = \"270--276\",\n abstract = \"Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {''}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.\",\n}\n~~~"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #ta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### For more details about our paper\n\nDebjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. \"Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection\".\n\n*Please cite our paper in any published work that uses any of these resources.*\n~~~\n@inproceedings{saha-etal-2021-hate,\n title = \"Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection\",\n author = \"Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh\",\n booktitle = \"Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages\",\n month = apr,\n year = \"2021\",\n address = \"Kyiv\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n pages = \"270--276\",\n abstract = \"Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {''}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.\",\n}\n~~~"
] |
text-generation | transformers |
# Rick Sanchez DialoGPT Model
| {"tags": ["conversational"]} | Havokx/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model
| [
"# Rick Sanchez DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] |
text-generation | null |
# My Awesome Model | {"tags": ["conversational"]} | Heldhy/DialoGPT-small-tony | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
|
# My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#conversational #region-us \n",
"# My Awesome Model"
] |
text-generation | transformers | # My Awesome Model | {"tags": ["conversational"]} | Heldhy/testingAgain | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4568
- Wer: 0.3422
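The Wer figure above is the word error rate: the word-level edit (Levenshtein) distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric (illustrative, not the exact implementation used during training):

```python
def wer(reference, hypothesis):
    # Word error rate: Levenshtein distance over word tokens / reference length.
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)
```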
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
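As a sketch of what the `linear` scheduler with 1000 warmup steps does to the learning rate: it ramps linearly from 0 to the base rate over the warmup steps, then decays linearly back to 0. The total step count below is an assumption inferred from the results table (125 steps per epoch × 30 epochs), not stated in the card:

```python
def linear_schedule_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=3750):
    # Linear warmup to base_lr, then linear decay to 0.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```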
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3896 | 4.0 | 500 | 1.1573 | 0.8886 |
| 0.5667 | 8.0 | 1000 | 0.4841 | 0.4470 |
| 0.2126 | 12.0 | 1500 | 0.4201 | 0.3852 |
| 0.1235 | 16.0 | 2000 | 0.4381 | 0.3623 |
| 0.0909 | 20.0 | 2500 | 0.4784 | 0.3748 |
| 0.0611 | 24.0 | 3000 | 0.4390 | 0.3577 |
| 0.0454 | 28.0 | 3500 | 0.4568 | 0.3422 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | Heldhy/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4568
* Wer: 0.3422
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text2text-generation | transformers | https://imgs.xkcd.com/comics/reassuring.png
| {} | Hellisotherpeople/T5_Reassuring_Parables | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| URL
| [] | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-classification | fasttext |
# debate2vec
Word-vectors created from a large corpus of competitive debate evidence, and data extraction / processing scripts
# Usage
```
import fasttext
ft = fasttext.load_model('debate2vec.bin')
ft.get_word_vector('dialectics')
```
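To compare two words, cosine similarity between their vectors is the usual measure; a small pure-Python helper (works on any pair of equal-length numeric sequences, such as the vectors returned by `get_word_vector`):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors; 1.0 = same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# e.g. cosine_similarity(ft.get_word_vector('dialectics'),
#                        ft.get_word_vector('praxis'))
```

The fastText Python module also exposes `ft.get_nearest_neighbors('dialectics')` if you want a ranked list directly.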
# Download Link
GitHub won't let me store large files in its repos.
* [FastText Vectors Here](https://drive.google.com/file/d/1m-CwPcaIUun4qvg69Hx2gom9dMScuQwS/view?usp=sharing) (~260mb)
# About
Created from all publicly available Cross Examination Competitive debate evidence posted by the community on [Open Evidence](https://openev.debatecoaches.org/) (from 2013-2020)
Search through the original evidence by going to [debate.cards](http://debate.cards/)
Stats about this corpus:
* 222485 unique documents larger than 200 words (DebateSum plus some additional debate docs that weren't well-formed enough for inclusion into DebateSum)
* 107555 unique words (showing up more than 10 times in the corpus)
* 101 million total words
Stats about debate2vec vectors:
* 300 dimensions, minimum number of appearances of a word was 10, trained for 100 epochs with lr set to 0.10 using FastText
* lowercased (will release cased)
* No subword information
The corpus includes the following topics:
* 2013-2014 Cuba/Mexico/Venezuela Economic Engagement
* 2014-2015 Oceans
* 2015-2016 Domestic Surveillance
* 2016-2017 China
* 2017-2018 Education
* 2018-2019 Immigration
* 2019-2020 Reducing Arms Sales
Other topics that this word vector model will handle extremely well:
* Philosophy (Especially Left-Wing / Post-modernist)
* Law
* Government
* Politics
Initial release is of fasttext vectors without subword information. Future releases will include fine-tuned GPT-2 and other high end models as my GPU compute allows.
# Screenshots



| {"library_name": "fasttext", "tags": ["text-classification"], "widget": [{"text": "dialectics", "example_title": "dialectics"}, {"text": "schizoanalysis", "example_title": "schizoanalysis"}, {"text": "praxis", "example_title": "praxis"}, {"text": "topicality", "example_title": "topicality"}]} | Hellisotherpeople/debate2vec | null | [
"fasttext",
"text-classification",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#fasttext #text-classification #region-us
|
# debate2vec
Word-vectors created from a large corpus of competitive debate evidence, and data extraction / processing scripts
#usage
# Download Link
Github won't let me store large files in their repos.
* FastText Vectors Here (~260mb)
# About
Created from all publicly available Cross Examination Competitive debate evidence posted by the community on Open Evidence (from 2013-2020)
Search through the original evidence by going to URL
Stats about this corpus:
* 222485 unique documents larger than 200 words (DebateSum plus some additional debate docs that weren't well-formed enough for inclusion into DebateSum)
* 107555 unique words (showing up more than 10 times in the corpus)
* 101 million total words
Stats about debate2vec vectors:
* 300 dimensions, minimum number of appearances of a word was 10, trained for 100 epochs with lr set to 0.10 using FastText
* lowercased (will release cased)
* No subword information
The corpus includes the following topics
* 2013-2014 Cuba/Mexico/Venezuela Economic Engagement
* 2014-2015 Oceans
* 2015-2016 Domestic Surveillance
* 2016-2017 China
* 2017-2018 Education
* 2018-2019 Immigration
* 2019-2020 Reducing Arms Sales
Other topics that this word vector model will handle extremely well
* Philosophy (Especially Left-Wing / Post-modernist)
* Law
* Government
* Politics
Initial release is of fasttext vectors without subword information. Future releases will include fine-tuned GPT-2 and other high end models as my GPU compute allows.
# Screenshots
",
"# About \n\nCreated from all publically available Cross Examination Competitive debate evidence posted by the community on Open Evidence (From 2013-2020)\n\nSearch through the original evidence by going to URL\n\nStats about this corpus: \n* 222485 unique documents larger than 200 words (DebateSum plus some additional debate docs that weren't well-formed enough for inclusion into DebateSum)\n* 107555 unique words (showing up more than 10 times in the corpus)\n* 101 million total words\n\nStats about debate2vec vectors: \n* 300 dimensions, minimum number of appearances of a word was 10, trained for 100 epochs with lr set to 0.10 using FastText\n* lowercased (will release cased)\n* No subword information\n\nThe corpus includes the following topics \n\n* 2013-2014 Cuba/Mexico/Venezuela Economic Engagement\n* 2014-2015 Oceans\n* 2015-2016 Domestic Surveillance\n* 2016-2017 China\n* 2017-2018 Education\n* 2018-2019 Immigration\n* 2019-2020 Reducing Arms Sales\n\nOther topics that this word vector model will handle extremely well\n\n* Philosophy (Especially Left-Wing / Post-modernist)\n* Law\n* Government \n* Politics\n\n\nInitial release is of fasttext vectors without subword information. Future releases will include fine-tuned GPT-2 and other high end models as my GPU compute allows.",
"# Screenshots\n",
"# About \n\nCreated from all publically available Cross Examination Competitive debate evidence posted by the community on Open Evidence (From 2013-2020)\n\nSearch through the original evidence by going to URL\n\nStats about this corpus: \n* 222485 unique documents larger than 200 words (DebateSum plus some additional debate docs that weren't well-formed enough for inclusion into DebateSum)\n* 107555 unique words (showing up more than 10 times in the corpus)\n* 101 million total words\n\nStats about debate2vec vectors: \n* 300 dimensions, minimum number of appearances of a word was 10, trained for 100 epochs with lr set to 0.10 using FastText\n* lowercased (will release cased)\n* No subword information\n\nThe corpus includes the following topics \n\n* 2013-2014 Cuba/Mexico/Venezuela Economic Engagement\n* 2014-2015 Oceans\n* 2015-2016 Domestic Surveillance\n* 2016-2017 China\n* 2017-2018 Education\n* 2018-2019 Immigration\n* 2019-2020 Reducing Arms Sales\n\nOther topics that this word vector model will handle extremely well\n\n* Philosophy (Especially Left-Wing / Post-modernist)\n* Law\n* Government \n* Politics\n\n\nInitial release is of fasttext vectors without subword information. Future releases will include fine-tuned GPT-2 and other high end models as my GPU compute allows.",
"# Screenshots\n
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.sv | 48.1 | 0.663 |
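Because a sentence-initial target-language token is required, input text must be prefixed with `>>id<<` before translation. A minimal sketch of preparing input (the full Marian pipeline is commented out because it downloads model weights; the exact token id, e.g. `>>swe<<`, is an assumption and must match one of the model's supported targets):

```python
def add_target_token(text, target_lang_id):
    # Multilingual Marian/OPUS-MT models route by a '>>id<<' prefix token.
    return f">>{target_lang_id}<< {text}"

# Hypothetical end-to-end use with transformers (model name taken from the card):
# from transformers import MarianMTModel, MarianTokenizer
# name = "Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU"
# tok = MarianTokenizer.from_pretrained(name)
# model = MarianMTModel.from_pretrained(name)
# batch = tok([add_target_token("Das ist gut.", "swe")], return_tensors="pt")
# print(tok.batch_decode(model.generate(**batch), skip_special_tokens=True))
```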
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-NORTH\_EU-NORTH\_EU
* source languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv
* target languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv
* OPUS readme: de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 48.1, chr-F: 0.663
| [
"### opus-mt-NORTH\\_EU-NORTH\\_EU\n\n\n* source languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv\n* target languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv\n* OPUS readme: de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.663"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-NORTH\\_EU-NORTH\\_EU\n\n\n* source languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv\n* target languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv\n* OPUS readme: de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.663"
] |
translation | transformers |
### opus-mt-ROMANCE-en
* source languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la
* target languages: en
* OPUS readme: [fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-01.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.zip)
* test set translations: [opus-2020-04-01.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.test.txt)
* test set scores: [opus-2020-04-01.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.en | 62.2 | 0.750 |
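The chr-F column is a character n-gram F-score (chrF). As a simplified single-n sketch of the idea (the official metric averages over n = 1..6 with β = 2; this is illustrative, not the reference implementation):

```python
from collections import Counter

def char_ngram_fscore(hyp, ref, n=6, beta=2.0):
    # Character n-gram F-beta score in the spirit of chrF.
    # Whitespace is stripped before extracting n-grams.
    def ngrams(s):
        s = s.replace(" ", "")
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))
    h, r = ngrams(hyp), ngrams(ref)
    overlap = sum((h & r).values())  # clipped n-gram matches
    if not overlap:
        return 0.0
    prec = overlap / sum(h.values())
    rec = overlap / sum(r.values())
    return (1 + beta ** 2) * prec * rec / (beta ** 2 * prec + rec)
```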
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ROMANCE-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"roa",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #roa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ROMANCE-en
* source languages: fr,fr\_BE,fr\_CA,fr\_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es\_AR,es\_CL,es\_CO,es\_CR,es\_DO,es\_EC,es\_ES,es\_GT,es\_HN,es\_MX,es\_NI,es\_PA,es\_PE,es\_PR,es\_SV,es\_UY,es\_VE,pt,pt\_br,pt\_BR,pt\_PT,gl,lad,an,mwl,it,it\_IT,co,nap,scn,vec,sc,ro,la
* target languages: en
* OPUS readme: fr+fr\_BE+fr\_CA+fr\_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es\_AR+es\_CL+es\_CO+es\_CR+es\_DO+es\_EC+es\_ES+es\_GT+es\_HN+es\_MX+es\_NI+es\_PA+es\_PE+es\_PR+es\_SV+es\_UY+es\_VE+pt+pt\_br+pt\_BR+pt\_PT+gl+lad+an+mwl+it+it\_IT+co+nap+scn+vec+sc+ro+la-en
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 62.2, chr-F: 0.750
| [
"### opus-mt-ROMANCE-en\n\n\n* source languages: fr,fr\\_BE,fr\\_CA,fr\\_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es\\_AR,es\\_CL,es\\_CO,es\\_CR,es\\_DO,es\\_EC,es\\_ES,es\\_GT,es\\_HN,es\\_MX,es\\_NI,es\\_PA,es\\_PE,es\\_PR,es\\_SV,es\\_UY,es\\_VE,pt,pt\\_br,pt\\_BR,pt\\_PT,gl,lad,an,mwl,it,it\\_IT,co,nap,scn,vec,sc,ro,la\n* target languages: en\n* OPUS readme: fr+fr\\_BE+fr\\_CA+fr\\_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es\\_AR+es\\_CL+es\\_CO+es\\_CR+es\\_DO+es\\_EC+es\\_ES+es\\_GT+es\\_HN+es\\_MX+es\\_NI+es\\_PA+es\\_PE+es\\_PR+es\\_SV+es\\_UY+es\\_VE+pt+pt\\_br+pt\\_BR+pt\\_PT+gl+lad+an+mwl+it+it\\_IT+co+nap+scn+vec+sc+ro+la-en\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 62.2, chr-F: 0.750"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #roa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ROMANCE-en\n\n\n* source languages: fr,fr\\_BE,fr\\_CA,fr\\_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es\\_AR,es\\_CL,es\\_CO,es\\_CR,es\\_DO,es\\_EC,es\\_ES,es\\_GT,es\\_HN,es\\_MX,es\\_NI,es\\_PA,es\\_PE,es\\_PR,es\\_SV,es\\_UY,es\\_VE,pt,pt\\_br,pt\\_BR,pt\\_PT,gl,lad,an,mwl,it,it\\_IT,co,nap,scn,vec,sc,ro,la\n* target languages: en\n* OPUS readme: fr+fr\\_BE+fr\\_CA+fr\\_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es\\_AR+es\\_CL+es\\_CO+es\\_CR+es\\_DO+es\\_EC+es\\_ES+es\\_GT+es\\_HN+es\\_MX+es\\_NI+es\\_PA+es\\_PE+es\\_PR+es\\_SV+es\\_UY+es\\_VE+pt+pt\\_br+pt\\_BR+pt\\_PT+gl+lad+an+mwl+it+it\\_IT+co+nap+scn+vec+sc+ro+la-en\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 62.2, chr-F: 0.750"
] |
translation | transformers |
### opus-mt-SCANDINAVIA-SCANDINAVIA
* source languages: da,fo,is,no,nb,nn,sv
* target languages: da,fo,is,no,nb,nn,sv
* OPUS readme: [da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.sv | 69.2 | 0.811 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"scandinavia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #scandinavia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-SCANDINAVIA-SCANDINAVIA
* source languages: da,fo,is,no,nb,nn,sv
* target languages: da,fo,is,no,nb,nn,sv
* OPUS readme: da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 69.2, chr-F: 0.811
| [
"### opus-mt-SCANDINAVIA-SCANDINAVIA\n\n\n* source languages: da,fo,is,no,nb,nn,sv\n* target languages: da,fo,is,no,nb,nn,sv\n* OPUS readme: da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 69.2, chr-F: 0.811"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #scandinavia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-SCANDINAVIA-SCANDINAVIA\n\n\n* source languages: da,fo,is,no,nb,nn,sv\n* target languages: da,fo,is,no,nb,nn,sv\n* OPUS readme: da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 69.2, chr-F: 0.811"
] |
translation | transformers |
### aav-eng
* source group: Austro-Asiatic languages
* target group: English
* OPUS readme: [aav-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md)
* model: transformer
* source language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hoc-eng.hoc.eng | 0.3 | 0.095 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.115 |
| Tatoeba-test.khm-eng.khm.eng | 8.9 | 0.271 |
| Tatoeba-test.mnw-eng.mnw.eng | 0.8 | 0.118 |
| Tatoeba-test.multi.eng | 24.8 | 0.391 |
| Tatoeba-test.vie-eng.vie.eng | 38.7 | 0.567 |
### System Info:
- hf_name: aav-eng
- source_languages: aav
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'km', 'aav', 'en']
- src_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt
- src_alpha3: aav
- tgt_alpha3: eng
- short_pair: aav-en
- chrF2_score: 0.391
- bleu: 24.8
- brevity_penalty: 0.968
- ref_len: 36693.0
- src_name: Austro-Asiatic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: aav
- tgt_alpha2: en
- prefer_old: False
- long_pair: aav-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "km", "aav", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-aav-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"km",
"aav",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"vi",
"km",
"aav",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #vi #km #aav #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### aav-eng
* source group: Austro-Asiatic languages
* target group: English
* OPUS readme: aav-eng
* model: transformer
* source language(s): hoc hoc\_Latn kha khm khm\_Latn mnw vie vie\_Hani
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 0.3, chr-F: 0.095
testset: URL, BLEU: 1.0, chr-F: 0.115
testset: URL, BLEU: 8.9, chr-F: 0.271
testset: URL, BLEU: 0.8, chr-F: 0.118
testset: URL, BLEU: 24.8, chr-F: 0.391
testset: URL, BLEU: 38.7, chr-F: 0.567
### System Info:
* hf\_name: aav-eng
* source\_languages: aav
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['vi', 'km', 'aav', 'en']
* src\_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie\_Hani', 'khm\_Latn', 'hoc\_Latn', 'hoc'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: aav
* tgt\_alpha3: eng
* short\_pair: aav-en
* chrF2\_score: 0.391
* bleu: 24.8
* brevity\_penalty: 0.968
* ref\_len: 36693.0
* src\_name: Austro-Asiatic languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: aav
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: aav-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### aav-eng\n\n\n* source group: Austro-Asiatic languages\n* target group: English\n* OPUS readme: aav-eng\n* model: transformer\n* source language(s): hoc hoc\\_Latn kha khm khm\\_Latn mnw vie vie\\_Hani\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.3, chr-F: 0.095\ntestset: URL, BLEU: 1.0, chr-F: 0.115\ntestset: URL, BLEU: 8.9, chr-F: 0.271\ntestset: URL, BLEU: 0.8, chr-F: 0.118\ntestset: URL, BLEU: 24.8, chr-F: 0.391\ntestset: URL, BLEU: 38.7, chr-F: 0.567",
"### System Info:\n\n\n* hf\\_name: aav-eng\n* source\\_languages: aav\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'km', 'aav', 'en']\n* src\\_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie\\_Hani', 'khm\\_Latn', 'hoc\\_Latn', 'hoc'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aav\n* tgt\\_alpha3: eng\n* short\\_pair: aav-en\n* chrF2\\_score: 0.391\n* bleu: 24.8\n* brevity\\_penalty: 0.968\n* ref\\_len: 36693.0\n* src\\_name: Austro-Asiatic languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: aav\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: aav-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #km #aav #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### aav-eng\n\n\n* source group: Austro-Asiatic languages\n* target group: English\n* OPUS readme: aav-eng\n* model: transformer\n* source language(s): hoc hoc\\_Latn kha khm khm\\_Latn mnw vie vie\\_Hani\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.3, chr-F: 0.095\ntestset: URL, BLEU: 1.0, chr-F: 0.115\ntestset: URL, BLEU: 8.9, chr-F: 0.271\ntestset: URL, BLEU: 0.8, chr-F: 0.118\ntestset: URL, BLEU: 24.8, chr-F: 0.391\ntestset: URL, BLEU: 38.7, chr-F: 0.567",
"### System Info:\n\n\n* hf\\_name: aav-eng\n* source\\_languages: aav\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'km', 'aav', 'en']\n* src\\_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie\\_Hani', 'khm\\_Latn', 'hoc\\_Latn', 'hoc'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aav\n* tgt\\_alpha3: eng\n* short\\_pair: aav-en\n* chrF2\\_score: 0.391\n* bleu: 24.8\n* brevity\\_penalty: 0.968\n* ref\\_len: 36693.0\n* src\\_name: Austro-Asiatic languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: aav\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: aav-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-aed-es
* source languages: aed
* target languages: es
* OPUS readme: [aed-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/aed-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.aed.es | 89.1 | 0.915 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-aed-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"aed",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #aed #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-aed-es
* source languages: aed
* target languages: es
* OPUS readme: aed-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 89.1, chr-F: 0.915
| [
"### opus-mt-aed-es\n\n\n* source languages: aed\n* target languages: es\n* OPUS readme: aed-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.1, chr-F: 0.915"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #aed #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-aed-es\n\n\n* source languages: aed\n* target languages: es\n* OPUS readme: aed-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.1, chr-F: 0.915"
] |
translation | transformers |
### opus-mt-af-de
* source languages: af
* target languages: de
* OPUS readme: [af-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-19.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.zip)
* test set translations: [opus-2020-01-19.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.test.txt)
* test set scores: [opus-2020-01-19.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.af.de | 48.6 | 0.681 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-af-de
* source languages: af
* target languages: de
* OPUS readme: af-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 48.6, chr-F: 0.681
| [
"### opus-mt-af-de\n\n\n* source languages: af\n* target languages: de\n* OPUS readme: af-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.6, chr-F: 0.681"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-af-de\n\n\n* source languages: af\n* target languages: de\n* OPUS readme: af-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.6, chr-F: 0.681"
] |
translation | transformers |
### opus-mt-af-en
* source languages: af
* target languages: en
* OPUS readme: [af-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.af.en | 60.8 | 0.736 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-af-en
* source languages: af
* target languages: en
* OPUS readme: af-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 60.8, chr-F: 0.736
| [
"### opus-mt-af-en\n\n\n* source languages: af\n* target languages: en\n* OPUS readme: af-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.8, chr-F: 0.736"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-af-en\n\n\n* source languages: af\n* target languages: en\n* OPUS readme: af-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.8, chr-F: 0.736"
] |
translation | transformers |
### afr-epo
* source group: Afrikaans
* target group: Esperanto
* OPUS readme: [afr-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.epo | 18.3 | 0.411 |
### System Info:
- hf_name: afr-epo
- source_languages: afr
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'eo']
- src_constituents: {'afr'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt
- src_alpha3: afr
- tgt_alpha3: epo
- short_pair: af-eo
- chrF2_score: 0.41100000000000003
- bleu: 18.3
- brevity_penalty: 0.995
- ref_len: 7517.0
- src_name: Afrikaans
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: af
- tgt_alpha2: eo
- prefer_old: False
- long_pair: afr-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["af", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"af",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### afr-epo
* source group: Afrikaans
* target group: Esperanto
* OPUS readme: afr-epo
* model: transformer-align
* source language(s): afr
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.3, chr-F: 0.411
### System Info:
* hf\_name: afr-epo
* source\_languages: afr
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['af', 'eo']
* src\_constituents: {'afr'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: afr
* tgt\_alpha3: epo
* short\_pair: af-eo
* chrF2\_score: 0.41100000000000003
* bleu: 18.3
* brevity\_penalty: 0.995
* ref\_len: 7517.0
* src\_name: Afrikaans
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: af
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: afr-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### afr-epo\n\n\n* source group: Afrikaans\n* target group: Esperanto\n* OPUS readme: afr-epo\n* model: transformer-align\n* source language(s): afr\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.411",
"### System Info:\n\n\n* hf\\_name: afr-epo\n* source\\_languages: afr\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'eo']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: epo\n* short\\_pair: af-eo\n* chrF2\\_score: 0.41100000000000003\n* bleu: 18.3\n* brevity\\_penalty: 0.995\n* ref\\_len: 7517.0\n* src\\_name: Afrikaans\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: af\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: afr-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### afr-epo\n\n\n* source group: Afrikaans\n* target group: Esperanto\n* OPUS readme: afr-epo\n* model: transformer-align\n* source language(s): afr\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.411",
"### System Info:\n\n\n* hf\\_name: afr-epo\n* source\\_languages: afr\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'eo']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: epo\n* short\\_pair: af-eo\n* chrF2\\_score: 0.41100000000000003\n* bleu: 18.3\n* brevity\\_penalty: 0.995\n* ref\\_len: 7517.0\n* src\\_name: Afrikaans\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: af\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: afr-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### afr-spa
* source group: Afrikaans
* target group: Spanish
* OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.spa | 49.9 | 0.680 |
### System Info:
- hf_name: afr-spa
- source_languages: afr
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'es']
- src_constituents: {'afr'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: spa
- short_pair: af-es
- chrF2_score: 0.68
- bleu: 49.9
- brevity_penalty: 1.0
- ref_len: 2783.0
- src_name: Afrikaans
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: es
- prefer_old: False
- long_pair: afr-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["af", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"af",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### afr-spa
* source group: Afrikaans
* target group: Spanish
* OPUS readme: afr-spa
* model: transformer-align
* source language(s): afr
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.9, chr-F: 0.680
### System Info:
* hf\_name: afr-spa
* source\_languages: afr
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['af', 'es']
* src\_constituents: {'afr'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: afr
* tgt\_alpha3: spa
* short\_pair: af-es
* chrF2\_score: 0.68
* bleu: 49.9
* brevity\_penalty: 1.0
* ref\_len: 2783.0
* src\_name: Afrikaans
* tgt\_name: Spanish
* train\_date: 2020-06-17
* src\_alpha2: af
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: afr-spa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### afr-spa\n\n\n* source group: Afrikaans\n* target group: Spanish\n* OPUS readme: afr-spa\n* model: transformer-align\n* source language(s): afr\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.9, chr-F: 0.680",
"### System Info:\n\n\n* hf\\_name: afr-spa\n* source\\_languages: afr\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'es']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: spa\n* short\\_pair: af-es\n* chrF2\\_score: 0.68\n* bleu: 49.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 2783.0\n* src\\_name: Afrikaans\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: af\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: afr-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### afr-spa\n\n\n* source group: Afrikaans\n* target group: Spanish\n* OPUS readme: afr-spa\n* model: transformer-align\n* source language(s): afr\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.9, chr-F: 0.680",
"### System Info:\n\n\n* hf\\_name: afr-spa\n* source\\_languages: afr\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'es']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: spa\n* short\\_pair: af-es\n* chrF2\\_score: 0.68\n* bleu: 49.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 2783.0\n* src\\_name: Afrikaans\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: af\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: afr-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-af-fi
* source languages: af
* target languages: fi
* OPUS readme: [af-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.fi | 32.3 | 0.576 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-af-fi
* source languages: af
* target languages: fi
* OPUS readme: af-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.3, chr-F: 0.576
| [
"### opus-mt-af-fi\n\n\n* source languages: af\n* target languages: fi\n* OPUS readme: af-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.576"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-af-fi\n\n\n* source languages: af\n* target languages: fi\n* OPUS readme: af-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.576"
] |
translation | transformers |
### opus-mt-af-fr
* source languages: af
* target languages: fr
* OPUS readme: [af-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.fr | 35.3 | 0.543 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-af-fr
* source languages: af
* target languages: fr
* OPUS readme: af-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.3, chr-F: 0.543
| [
"### opus-mt-af-fr\n\n\n* source languages: af\n* target languages: fr\n* OPUS readme: af-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.543"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-af-fr\n\n\n* source languages: af\n* target languages: fr\n* OPUS readme: af-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.543"
] |
translation | transformers |
### afr-nld
* source group: Afrikaans
* target group: Dutch
* OPUS readme: [afr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.nld | 55.2 | 0.715 |
### System Info:
- hf_name: afr-nld
- source_languages: afr
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'nl']
- src_constituents: {'afr'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: nld
- short_pair: af-nl
- chrF2_score: 0.715
- bleu: 55.2
- brevity_penalty: 0.995
- ref_len: 6710.0
- src_name: Afrikaans
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: nl
- prefer_old: False
- long_pair: afr-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["af", "nl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-nl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"af",
"nl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### afr-nld
* source group: Afrikaans
* target group: Dutch
* OPUS readme: afr-nld
* model: transformer-align
* source language(s): afr
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 55.2, chr-F: 0.715
### System Info:
* hf\_name: afr-nld
* source\_languages: afr
* target\_languages: nld
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['af', 'nl']
* src\_constituents: {'afr'}
* tgt\_constituents: {'nld'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: afr
* tgt\_alpha3: nld
* short\_pair: af-nl
* chrF2\_score: 0.715
* bleu: 55.2
* brevity\_penalty: 0.995
* ref\_len: 6710.0
* src\_name: Afrikaans
* tgt\_name: Dutch
* train\_date: 2020-06-17
* src\_alpha2: af
* tgt\_alpha2: nl
* prefer\_old: False
* long\_pair: afr-nld
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### afr-nld\n\n\n* source group: Afrikaans\n* target group: Dutch\n* OPUS readme: afr-nld\n* model: transformer-align\n* source language(s): afr\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.2, chr-F: 0.715",
"### System Info:\n\n\n* hf\\_name: afr-nld\n* source\\_languages: afr\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'nl']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: nld\n* short\\_pair: af-nl\n* chrF2\\_score: 0.715\n* bleu: 55.2\n* brevity\\_penalty: 0.995\n* ref\\_len: 6710.0\n* src\\_name: Afrikaans\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: af\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: afr-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### afr-nld\n\n\n* source group: Afrikaans\n* target group: Dutch\n* OPUS readme: afr-nld\n* model: transformer-align\n* source language(s): afr\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.2, chr-F: 0.715",
"### System Info:\n\n\n* hf\\_name: afr-nld\n* source\\_languages: afr\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'nl']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: nld\n* short\\_pair: af-nl\n* chrF2\\_score: 0.715\n* bleu: 55.2\n* brevity\\_penalty: 0.995\n* ref\\_len: 6710.0\n* src\\_name: Afrikaans\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: af\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: afr-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### afr-rus
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: afr-rus
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: {'afr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- short_pair: af-ru
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213.0
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- long_pair: afr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["af", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"af",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### afr-rus
* source group: Afrikaans
* target group: Russian
* OPUS readme: afr-rus
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.2, chr-F: 0.580
### System Info:
* hf\_name: afr-rus
* source\_languages: afr
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['af', 'ru']
* src\_constituents: {'afr'}
* tgt\_constituents: {'rus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: afr
* tgt\_alpha3: rus
* short\_pair: af-ru
* chrF2\_score: 0.58
* bleu: 38.2
* brevity\_penalty: 0.992
* ref\_len: 1213.0
* src\_name: Afrikaans
* tgt\_name: Russian
* train\_date: 2020-06-17
* src\_alpha2: af
* tgt\_alpha2: ru
* prefer\_old: False
* long\_pair: afr-rus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### afr-rus\n\n\n* source group: Afrikaans\n* target group: Russian\n* OPUS readme: afr-rus\n* model: transformer-align\n* source language(s): afr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.580",
"### System Info:\n\n\n* hf\\_name: afr-rus\n* source\\_languages: afr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'ru']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: rus\n* short\\_pair: af-ru\n* chrF2\\_score: 0.58\n* bleu: 38.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 1213.0\n* src\\_name: Afrikaans\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: af\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: afr-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### afr-rus\n\n\n* source group: Afrikaans\n* target group: Russian\n* OPUS readme: afr-rus\n* model: transformer-align\n* source language(s): afr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.580",
"### System Info:\n\n\n* hf\\_name: afr-rus\n* source\\_languages: afr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'ru']\n* src\\_constituents: {'afr'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: rus\n* short\\_pair: af-ru\n* chrF2\\_score: 0.58\n* bleu: 38.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 1213.0\n* src\\_name: Afrikaans\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: af\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: afr-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-af-sv
* source languages: af
* target languages: sv
* OPUS readme: [af-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.sv | 40.4 | 0.599 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-af-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #af #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-af-sv
* source languages: af
* target languages: sv
* OPUS readme: af-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.4, chr-F: 0.599
| [
"### opus-mt-af-sv\n\n\n* source languages: af\n* target languages: sv\n* OPUS readme: af-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.4, chr-F: 0.599"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #af #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-af-sv\n\n\n* source languages: af\n* target languages: sv\n* OPUS readme: af-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.4, chr-F: 0.599"
] |
translation | transformers |
### afa-afa
* source group: Afro-Asiatic languages
* target group: Afro-Asiatic languages
* OPUS readme: [afa-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md)
* model: transformer
* source language(s): apc ara arq arz heb kab mlt shy_Latn thv
* target language(s): apc ara arq arz heb kab mlt shy_Latn thv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.eval.txt)
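
As an illustration (not part of the original card), the sentence-initial `>>id<<` token described above can be prepended with a small helper. This is a minimal sketch assuming the `transformers` MarianMT API; the model-loading lines are shown commented out so the snippet stays self-contained, and the example sentences are hypothetical.

```python
# Sketch: preparing input for a multilingual OPUS-MT model such as
# Helsinki-NLP/opus-mt-afa-afa, which requires a sentence-initial
# >>id<< token naming the target language (e.g. >>heb<<, >>ara<<).

def prepend_target_token(sentences, target_id):
    """Prefix each source sentence with the >>id<< target-language token."""
    return [f">>{target_id}<< {s}" for s in sentences]

# Hypothetical source sentences; "ara" selects Arabic as the target.
batch = prepend_target_token(["Shalom!", "Boker tov."], "ara")
print(batch[0])  # >>ara<< Shalom!

# With transformers installed, translation would then look like:
# from transformers import MarianMTModel, MarianTokenizer
# tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-afa-afa")
# model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-afa-afa")
# out = model.generate(**tok(batch, return_tensors="pt", padding=True))
# print(tok.batch_decode(out, skip_special_tokens=True))
```

Without the token the model has no way to know which of its target languages to decode into, so the prefix must be present on every input sentence.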
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara-ara.ara.ara | 4.3 | 0.148 |
| Tatoeba-test.ara-heb.ara.heb | 31.9 | 0.525 |
| Tatoeba-test.ara-kab.ara.kab | 0.3 | 0.120 |
| Tatoeba-test.ara-mlt.ara.mlt | 14.0 | 0.428 |
| Tatoeba-test.ara-shy.ara.shy | 1.3 | 0.050 |
| Tatoeba-test.heb-ara.heb.ara | 17.0 | 0.464 |
| Tatoeba-test.heb-kab.heb.kab | 1.9 | 0.104 |
| Tatoeba-test.kab-ara.kab.ara | 0.3 | 0.044 |
| Tatoeba-test.kab-heb.kab.heb | 5.1 | 0.099 |
| Tatoeba-test.kab-shy.kab.shy | 2.2 | 0.009 |
| Tatoeba-test.kab-tmh.kab.tmh | 10.7 | 0.007 |
| Tatoeba-test.mlt-ara.mlt.ara | 29.1 | 0.498 |
| Tatoeba-test.multi.multi | 20.8 | 0.434 |
| Tatoeba-test.shy-ara.shy.ara | 1.2 | 0.053 |
| Tatoeba-test.shy-kab.shy.kab | 2.0 | 0.134 |
| Tatoeba-test.tmh-kab.tmh.kab | 0.0 | 0.047 |
### System Info:
- hf_name: afa-afa
- source_languages: afa
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt
- src_alpha3: afa
- tgt_alpha3: afa
- short_pair: afa-afa
- chrF2_score: 0.434
- bleu: 20.8
- brevity_penalty: 1.0
- ref_len: 15215.0
- src_name: Afro-Asiatic languages
- tgt_name: Afro-Asiatic languages
- train_date: 2020-07-26
- src_alpha2: afa
- tgt_alpha2: afa
- prefer_old: False
- long_pair: afa-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["so", "ti", "am", "he", "mt", "ar", "afa"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-afa-afa | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #so #ti #am #he #mt #ar #afa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### afa-afa
* source group: Afro-Asiatic languages
* target group: Afro-Asiatic languages
* OPUS readme: afa-afa
* model: transformer
* source language(s): apc ara arq arz heb kab mlt shy\_Latn thv
* target language(s): apc ara arq arz heb kab mlt shy\_Latn thv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 4.3, chr-F: 0.148
testset: URL, BLEU: 31.9, chr-F: 0.525
testset: URL, BLEU: 0.3, chr-F: 0.120
testset: URL, BLEU: 14.0, chr-F: 0.428
testset: URL, BLEU: 1.3, chr-F: 0.050
testset: URL, BLEU: 17.0, chr-F: 0.464
testset: URL, BLEU: 1.9, chr-F: 0.104
testset: URL, BLEU: 0.3, chr-F: 0.044
testset: URL, BLEU: 5.1, chr-F: 0.099
testset: URL, BLEU: 2.2, chr-F: 0.009
testset: URL, BLEU: 10.7, chr-F: 0.007
testset: URL, BLEU: 29.1, chr-F: 0.498
testset: URL, BLEU: 20.8, chr-F: 0.434
testset: URL, BLEU: 1.2, chr-F: 0.053
testset: URL, BLEU: 2.0, chr-F: 0.134
testset: URL, BLEU: 0.0, chr-F: 0.047
### System Info:
* hf\_name: afa-afa
* source\_languages: afa
* target\_languages: afa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
* src\_constituents: {'som', 'rif\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\_Latn', 'acm', 'ary'}
* tgt\_constituents: {'som', 'rif\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\_Latn', 'acm', 'ary'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: afa
* tgt\_alpha3: afa
* short\_pair: afa-afa
* chrF2\_score: 0.434
* bleu: 20.8
* brevity\_penalty: 1.0
* ref\_len: 15215.0
* src\_name: Afro-Asiatic languages
* tgt\_name: Afro-Asiatic languages
* train\_date: 2020-07-26
* src\_alpha2: afa
* tgt\_alpha2: afa
* prefer\_old: False
* long\_pair: afa-afa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### afa-afa\n\n\n* source group: Afro-Asiatic languages\n* target group: Afro-Asiatic languages\n* OPUS readme: afa-afa\n* model: transformer\n* source language(s): apc ara arq arz heb kab mlt shy\\_Latn thv\n* target language(s): apc ara arq arz heb kab mlt shy\\_Latn thv\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.3, chr-F: 0.148\ntestset: URL, BLEU: 31.9, chr-F: 0.525\ntestset: URL, BLEU: 0.3, chr-F: 0.120\ntestset: URL, BLEU: 14.0, chr-F: 0.428\ntestset: URL, BLEU: 1.3, chr-F: 0.050\ntestset: URL, BLEU: 17.0, chr-F: 0.464\ntestset: URL, BLEU: 1.9, chr-F: 0.104\ntestset: URL, BLEU: 0.3, chr-F: 0.044\ntestset: URL, BLEU: 5.1, chr-F: 0.099\ntestset: URL, BLEU: 2.2, chr-F: 0.009\ntestset: URL, BLEU: 10.7, chr-F: 0.007\ntestset: URL, BLEU: 29.1, chr-F: 0.498\ntestset: URL, BLEU: 20.8, chr-F: 0.434\ntestset: URL, BLEU: 1.2, chr-F: 0.053\ntestset: URL, BLEU: 2.0, chr-F: 0.134\ntestset: URL, BLEU: 0.0, chr-F: 0.047",
"### System Info:\n\n\n* hf\\_name: afa-afa\n* source\\_languages: afa\n* target\\_languages: afa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']\n* src\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* tgt\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afa\n* tgt\\_alpha3: afa\n* short\\_pair: afa-afa\n* chrF2\\_score: 0.434\n* bleu: 20.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 15215.0\n* src\\_name: Afro-Asiatic languages\n* tgt\\_name: Afro-Asiatic languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: afa\n* tgt\\_alpha2: afa\n* prefer\\_old: False\n* long\\_pair: afa-afa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #so #ti #am #he #mt #ar #afa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### afa-afa\n\n\n* source group: Afro-Asiatic languages\n* target group: Afro-Asiatic languages\n* OPUS readme: afa-afa\n* model: transformer\n* source language(s): apc ara arq arz heb kab mlt shy\\_Latn thv\n* target language(s): apc ara arq arz heb kab mlt shy\\_Latn thv\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.3, chr-F: 0.148\ntestset: URL, BLEU: 31.9, chr-F: 0.525\ntestset: URL, BLEU: 0.3, chr-F: 0.120\ntestset: URL, BLEU: 14.0, chr-F: 0.428\ntestset: URL, BLEU: 1.3, chr-F: 0.050\ntestset: URL, BLEU: 17.0, chr-F: 0.464\ntestset: URL, BLEU: 1.9, chr-F: 0.104\ntestset: URL, BLEU: 0.3, chr-F: 0.044\ntestset: URL, BLEU: 5.1, chr-F: 0.099\ntestset: URL, BLEU: 2.2, chr-F: 0.009\ntestset: URL, BLEU: 10.7, chr-F: 0.007\ntestset: URL, BLEU: 29.1, chr-F: 0.498\ntestset: URL, BLEU: 20.8, chr-F: 0.434\ntestset: URL, BLEU: 1.2, chr-F: 0.053\ntestset: URL, BLEU: 2.0, chr-F: 0.134\ntestset: URL, BLEU: 0.0, chr-F: 0.047",
"### System Info:\n\n\n* hf\\_name: afa-afa\n* source\\_languages: afa\n* target\\_languages: afa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']\n* src\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* tgt\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afa\n* tgt\\_alpha3: afa\n* short\\_pair: afa-afa\n* chrF2\\_score: 0.434\n* bleu: 20.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 15215.0\n* src\\_name: Afro-Asiatic languages\n* tgt\\_name: Afro-Asiatic languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: afa\n* tgt\\_alpha2: afa\n* prefer\\_old: False\n* long\\_pair: afa-afa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### afa-eng
* source group: Afro-Asiatic languages
* target group: English
* OPUS readme: [afa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md)
* model: transformer
* source language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.amh-eng.amh.eng | 35.9 | 0.550 |
| Tatoeba-test.ara-eng.ara.eng | 36.6 | 0.543 |
| Tatoeba-test.hau-eng.hau.eng | 11.9 | 0.327 |
| Tatoeba-test.heb-eng.heb.eng | 42.7 | 0.591 |
| Tatoeba-test.kab-eng.kab.eng | 4.3 | 0.213 |
| Tatoeba-test.mlt-eng.mlt.eng | 44.3 | 0.618 |
| Tatoeba-test.multi.eng | 27.1 | 0.464 |
| Tatoeba-test.rif-eng.rif.eng | 3.5 | 0.141 |
| Tatoeba-test.shy-eng.shy.eng | 0.6 | 0.125 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.tir-eng.tir.eng | 13.1 | 0.328 |
### System Info:
- hf_name: afa-eng
- source_languages: afa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en']
- src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt
- src_alpha3: afa
- tgt_alpha3: eng
- short_pair: afa-en
- chrF2_score: 0.46399999999999997
- bleu: 27.1
- brevity_penalty: 1.0
- ref_len: 69373.0
- src_name: Afro-Asiatic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: afa
- tgt_alpha2: en
- prefer_old: False
- long_pair: afa-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["so", "ti", "am", "he", "mt", "ar", "afa", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-afa-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #so #ti #am #he #mt #ar #afa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### afa-eng
* source group: Afro-Asiatic languages
* target group: English
* OPUS readme: afa-eng
* model: transformer
* source language(s): acm afb amh apc ara arq ary arz hau\_Latn heb kab mlt rif\_Latn shy\_Latn som tir
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.9, chr-F: 0.550
testset: URL, BLEU: 36.6, chr-F: 0.543
testset: URL, BLEU: 11.9, chr-F: 0.327
testset: URL, BLEU: 42.7, chr-F: 0.591
testset: URL, BLEU: 4.3, chr-F: 0.213
testset: URL, BLEU: 44.3, chr-F: 0.618
testset: URL, BLEU: 27.1, chr-F: 0.464
testset: URL, BLEU: 3.5, chr-F: 0.141
testset: URL, BLEU: 0.6, chr-F: 0.125
testset: URL, BLEU: 23.6, chr-F: 0.472
testset: URL, BLEU: 13.1, chr-F: 0.328
### System Info:
* hf\_name: afa-eng
* source\_languages: afa
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en']
* src\_constituents: {'som', 'rif\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\_Latn', 'acm', 'ary'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: afa
* tgt\_alpha3: eng
* short\_pair: afa-en
* chrF2\_score: 0.46399999999999997
* bleu: 27.1
* brevity\_penalty: 1.0
* ref\_len: 69373.0
* src\_name: Afro-Asiatic languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: afa
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: afa-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### afa-eng\n\n\n* source group: Afro-Asiatic languages\n* target group: English\n* OPUS readme: afa-eng\n* model: transformer\n* source language(s): acm afb amh apc ara arq ary arz hau\\_Latn heb kab mlt rif\\_Latn shy\\_Latn som tir\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.550\ntestset: URL, BLEU: 36.6, chr-F: 0.543\ntestset: URL, BLEU: 11.9, chr-F: 0.327\ntestset: URL, BLEU: 42.7, chr-F: 0.591\ntestset: URL, BLEU: 4.3, chr-F: 0.213\ntestset: URL, BLEU: 44.3, chr-F: 0.618\ntestset: URL, BLEU: 27.1, chr-F: 0.464\ntestset: URL, BLEU: 3.5, chr-F: 0.141\ntestset: URL, BLEU: 0.6, chr-F: 0.125\ntestset: URL, BLEU: 23.6, chr-F: 0.472\ntestset: URL, BLEU: 13.1, chr-F: 0.328",
"### System Info:\n\n\n* hf\\_name: afa-eng\n* source\\_languages: afa\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en']\n* src\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afa\n* tgt\\_alpha3: eng\n* short\\_pair: afa-en\n* chrF2\\_score: 0.46399999999999997\n* bleu: 27.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 69373.0\n* src\\_name: Afro-Asiatic languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: afa\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: afa-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #so #ti #am #he #mt #ar #afa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### afa-eng\n\n\n* source group: Afro-Asiatic languages\n* target group: English\n* OPUS readme: afa-eng\n* model: transformer\n* source language(s): acm afb amh apc ara arq ary arz hau\\_Latn heb kab mlt rif\\_Latn shy\\_Latn som tir\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.550\ntestset: URL, BLEU: 36.6, chr-F: 0.543\ntestset: URL, BLEU: 11.9, chr-F: 0.327\ntestset: URL, BLEU: 42.7, chr-F: 0.591\ntestset: URL, BLEU: 4.3, chr-F: 0.213\ntestset: URL, BLEU: 44.3, chr-F: 0.618\ntestset: URL, BLEU: 27.1, chr-F: 0.464\ntestset: URL, BLEU: 3.5, chr-F: 0.141\ntestset: URL, BLEU: 0.6, chr-F: 0.125\ntestset: URL, BLEU: 23.6, chr-F: 0.472\ntestset: URL, BLEU: 13.1, chr-F: 0.328",
"### System Info:\n\n\n* hf\\_name: afa-eng\n* source\\_languages: afa\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en']\n* src\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afa\n* tgt\\_alpha3: eng\n* short\\_pair: afa-en\n* chrF2\\_score: 0.46399999999999997\n* bleu: 27.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 69373.0\n* src\\_name: Afro-Asiatic languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: afa\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: afa-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### alv-eng
* source group: Atlantic-Congo languages
* target group: English
* OPUS readme: [alv-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md)
* model: transformer
* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 |
| Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 |
| Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 |
| Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 |
| Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 |
| Tatoeba-test.multi.eng | 20.9 | 0.376 |
| Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 |
| Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 |
| Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 |
| Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 |
| Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 |
| Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 |
| Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 |
| Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 |
| Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 |
| Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 |
### System Info:
- hf_name: alv-eng
- source_languages: alv
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']
- src_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt
- src_alpha3: alv
- tgt_alpha3: eng
- short_pair: alv-en
- chrF2_score: 0.376
- bleu: 20.9
- brevity_penalty: 1.0
- ref_len: 15208.0
- src_name: Atlantic-Congo languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: alv
- tgt_alpha2: en
- prefer_old: False
- long_pair: alv-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "alv", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-alv-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #alv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### alv-eng
* source group: Atlantic-Congo languages
* target group: English
* OPUS readme: alv-eng
* model: transformer
* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 6.3, chr-F: 0.328
testset: URL, BLEU: 0.4, chr-F: 0.108
testset: URL, BLEU: 4.5, chr-F: 0.196
testset: URL, BLEU: 30.7, chr-F: 0.511
testset: URL, BLEU: 2.8, chr-F: 0.213
testset: URL, BLEU: 3.4, chr-F: 0.140
testset: URL, BLEU: 20.9, chr-F: 0.376
testset: URL, BLEU: 38.7, chr-F: 0.492
testset: URL, BLEU: 24.5, chr-F: 0.417
testset: URL, BLEU: 5.5, chr-F: 0.177
testset: URL, BLEU: 26.9, chr-F: 0.412
testset: URL, BLEU: 4.9, chr-F: 0.196
testset: URL, BLEU: 3.9, chr-F: 0.147
testset: URL, BLEU: 76.7, chr-F: 0.957
testset: URL, BLEU: 4.0, chr-F: 0.195
testset: URL, BLEU: 3.7, chr-F: 0.170
testset: URL, BLEU: 38.9, chr-F: 0.556
testset: URL, BLEU: 25.1, chr-F: 0.412
testset: URL, BLEU: 46.1, chr-F: 0.623
### System Info:
* hf\_name: alv-eng
* source\_languages: alv
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']
* src\_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\_Latn', 'umb'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: alv
* tgt\_alpha3: eng
* short\_pair: alv-en
* chrF2\_score: 0.376
* bleu: 20.9
* brevity\_penalty: 1.0
* ref\_len: 15208.0
* src\_name: Atlantic-Congo languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: alv
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: alv-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### alv-eng\n\n\n* source group: Atlantic-Congo languages\n* target group: English\n* OPUS readme: alv-eng\n* model: transformer\n* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.3, chr-F: 0.328\ntestset: URL, BLEU: 0.4, chr-F: 0.108\ntestset: URL, BLEU: 4.5, chr-F: 0.196\ntestset: URL, BLEU: 30.7, chr-F: 0.511\ntestset: URL, BLEU: 2.8, chr-F: 0.213\ntestset: URL, BLEU: 3.4, chr-F: 0.140\ntestset: URL, BLEU: 20.9, chr-F: 0.376\ntestset: URL, BLEU: 38.7, chr-F: 0.492\ntestset: URL, BLEU: 24.5, chr-F: 0.417\ntestset: URL, BLEU: 5.5, chr-F: 0.177\ntestset: URL, BLEU: 26.9, chr-F: 0.412\ntestset: URL, BLEU: 4.9, chr-F: 0.196\ntestset: URL, BLEU: 3.9, chr-F: 0.147\ntestset: URL, BLEU: 76.7, chr-F: 0.957\ntestset: URL, BLEU: 4.0, chr-F: 0.195\ntestset: URL, BLEU: 3.7, chr-F: 0.170\ntestset: URL, BLEU: 38.9, chr-F: 0.556\ntestset: URL, BLEU: 25.1, chr-F: 0.412\ntestset: URL, BLEU: 46.1, chr-F: 0.623",
"### System Info:\n\n\n* hf\\_name: alv-eng\n* source\\_languages: alv\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']\n* src\\_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: alv\n* tgt\\_alpha3: eng\n* short\\_pair: alv-en\n* chrF2\\_score: 0.376\n* bleu: 20.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 15208.0\n* src\\_name: Atlantic-Congo languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: alv\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: alv-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #alv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### alv-eng\n\n\n* source group: Atlantic-Congo languages\n* target group: English\n* OPUS readme: alv-eng\n* model: transformer\n* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.3, chr-F: 0.328\ntestset: URL, BLEU: 0.4, chr-F: 0.108\ntestset: URL, BLEU: 4.5, chr-F: 0.196\ntestset: URL, BLEU: 30.7, chr-F: 0.511\ntestset: URL, BLEU: 2.8, chr-F: 0.213\ntestset: URL, BLEU: 3.4, chr-F: 0.140\ntestset: URL, BLEU: 20.9, chr-F: 0.376\ntestset: URL, BLEU: 38.7, chr-F: 0.492\ntestset: URL, BLEU: 24.5, chr-F: 0.417\ntestset: URL, BLEU: 5.5, chr-F: 0.177\ntestset: URL, BLEU: 26.9, chr-F: 0.412\ntestset: URL, BLEU: 4.9, chr-F: 0.196\ntestset: URL, BLEU: 3.9, chr-F: 0.147\ntestset: URL, BLEU: 76.7, chr-F: 0.957\ntestset: URL, BLEU: 4.0, chr-F: 0.195\ntestset: URL, BLEU: 3.7, chr-F: 0.170\ntestset: URL, BLEU: 38.9, chr-F: 0.556\ntestset: URL, BLEU: 25.1, chr-F: 0.412\ntestset: URL, BLEU: 46.1, chr-F: 0.623",
"### System Info:\n\n\n* hf\\_name: alv-eng\n* source\\_languages: alv\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']\n* src\\_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: alv\n* tgt\\_alpha3: eng\n* short\\_pair: alv-en\n* chrF2\\_score: 0.376\n* bleu: 20.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 15208.0\n* src\\_name: Atlantic-Congo languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: alv\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: alv-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-am-sv
* source languages: am
* target languages: sv
* OPUS readme: [am-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/am-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.am.sv | 21.0 | 0.377 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-am-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"am",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #am #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-am-sv
* source languages: am
* target languages: sv
* OPUS readme: am-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.0, chr-F: 0.377
| [
"### opus-mt-am-sv\n\n\n* source languages: am\n* target languages: sv\n* OPUS readme: am-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.0, chr-F: 0.377"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #am #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-am-sv\n\n\n* source languages: am\n* target languages: sv\n* OPUS readme: am-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.0, chr-F: 0.377"
] |
translation | transformers |
### ara-deu
* source group: Arabic
* target group: German
* OPUS readme: [ara-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md)
* model: transformer-align
* source language(s): afb apc ara ara_Latn arq arz
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.deu | 44.7 | 0.629 |
### System Info:
- hf_name: ara-deu
- source_languages: ara
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'de']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: deu
- short_pair: ar-de
- chrF2_score: 0.629
- bleu: 44.7
- brevity_penalty: 0.986
- ref_len: 8371.0
- src_name: Arabic
- tgt_name: German
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: de
- prefer_old: False
- long_pair: ara-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"de"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-deu
* source group: Arabic
* target group: German
* OPUS readme: ara-deu
* model: transformer-align
* source language(s): afb apc ara ara\_Latn arq arz
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.7, chr-F: 0.629
### System Info:
* hf\_name: ara-deu
* source\_languages: ara
* target\_languages: deu
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'de']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'deu'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: deu
* short\_pair: ar-de
* chrF2\_score: 0.629
* bleu: 44.7
* brevity\_penalty: 0.986
* ref\_len: 8371.0
* src\_name: Arabic
* tgt\_name: German
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: de
* prefer\_old: False
* long\_pair: ara-deu
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ara-deu\n\n\n* source group: Arabic\n* target group: German\n* OPUS readme: ara-deu\n* model: transformer-align\n* source language(s): afb apc ara ara\\_Latn arq arz\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.7, chr-F: 0.629",
"### System Info:\n\n\n* hf\\_name: ara-deu\n* source\\_languages: ara\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'de']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: deu\n* short\\_pair: ar-de\n* chrF2\\_score: 0.629\n* bleu: 44.7\n* brevity\\_penalty: 0.986\n* ref\\_len: 8371.0\n* src\\_name: Arabic\n* tgt\\_name: German\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: ara-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-deu\n\n\n* source group: Arabic\n* target group: German\n* OPUS readme: ara-deu\n* model: transformer-align\n* source language(s): afb apc ara ara\\_Latn arq arz\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.7, chr-F: 0.629",
"### System Info:\n\n\n* hf\\_name: ara-deu\n* source\\_languages: ara\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'de']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: deu\n* short\\_pair: ar-de\n* chrF2\\_score: 0.629\n* bleu: 44.7\n* brevity\\_penalty: 0.986\n* ref\\_len: 8371.0\n* src\\_name: Arabic\n* tgt\\_name: German\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: ara-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ara-ell
* source group: Arabic
* target group: Modern Greek (1453-)
* OPUS readme: [ara-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md)
* model: transformer-align
* source language(s): ara arz
* target language(s): ell
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.ell | 43.9 | 0.636 |
### System Info:
- hf_name: ara-ell
- source_languages: ara
- target_languages: ell
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'el']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'ell'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: ell
- short_pair: ar-el
- chrF2_score: 0.636
- bleu: 43.9
- brevity_penalty: 0.993
- ref_len: 2009.0
- src_name: Arabic
- tgt_name: Modern Greek (1453-)
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: el
- prefer_old: False
- long_pair: ara-ell
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "el"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-el | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"el"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-ell
* source group: Arabic
* target group: Modern Greek (1453-)
* OPUS readme: ara-ell
* model: transformer-align
* source language(s): ara arz
* target language(s): ell
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 43.9, chr-F: 0.636
### System Info:
* hf\_name: ara-ell
* source\_languages: ara
* target\_languages: ell
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'el']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'ell'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: ell
* short\_pair: ar-el
* chrF2\_score: 0.636
* bleu: 43.9
* brevity\_penalty: 0.993
* ref\_len: 2009.0
* src\_name: Arabic
* tgt\_name: Modern Greek (1453-)
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: el
* prefer\_old: False
* long\_pair: ara-ell
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ara-ell\n\n\n* source group: Arabic\n* target group: Modern Greek (1453-)\n* OPUS readme: ara-ell\n* model: transformer-align\n* source language(s): ara arz\n* target language(s): ell\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.9, chr-F: 0.636",
"### System Info:\n\n\n* hf\\_name: ara-ell\n* source\\_languages: ara\n* target\\_languages: ell\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'el']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: ell\n* short\\_pair: ar-el\n* chrF2\\_score: 0.636\n* bleu: 43.9\n* brevity\\_penalty: 0.993\n* ref\\_len: 2009.0\n* src\\_name: Arabic\n* tgt\\_name: Modern Greek (1453-)\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: el\n* prefer\\_old: False\n* long\\_pair: ara-ell\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-ell\n\n\n* source group: Arabic\n* target group: Modern Greek (1453-)\n* OPUS readme: ara-ell\n* model: transformer-align\n* source language(s): ara arz\n* target language(s): ell\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.9, chr-F: 0.636",
"### System Info:\n\n\n* hf\\_name: ara-ell\n* source\\_languages: ara\n* target\\_languages: ell\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'el']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: ell\n* short\\_pair: ar-el\n* chrF2\\_score: 0.636\n* bleu: 43.9\n* brevity\\_penalty: 0.993\n* ref\\_len: 2009.0\n* src\\_name: Arabic\n* tgt\\_name: Modern Greek (1453-)\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: el\n* prefer\\_old: False\n* long\\_pair: ara-ell\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ar-en
* source languages: ar
* target languages: en
* OPUS readme: [ar-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ar.en | 49.4 | 0.661 |
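The chr-F column above is a character n-gram F-score (chrF2, i.e. F-beta with beta = 2 over character n-grams up to order 6). A rough, simplified sketch of the idea — this is not the exact reference implementation, which averages per n-gram order and handles whitespace and edge cases more carefully:

```python
from collections import Counter

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified character n-gram F-beta score in the spirit of chrF2.
    Whitespace is stripped and n-gram statistics are pooled across orders."""
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        if hyp_ngrams:
            precisions.append(overlap / sum(hyp_ngrams.values()))
        if ref_ngrams:
            recalls.append(overlap / sum(ref_ngrams.values()))
    p = sum(precisions) / len(precisions) if precisions else 0.0
    r = sum(recalls) / len(recalls) if recalls else 0.0
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("the cat sat", "the cat sat"))  # 1.0 for identical strings
```

Because it works on characters rather than tokens, chrF gives partial credit for near-miss word forms, which is why it is reported alongside BLEU throughout these cards.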
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"ar",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #ar #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ar-en
* source languages: ar
* target languages: en
* OPUS readme: ar-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.4, chr-F: 0.661
| [
"### opus-mt-ar-en\n\n\n* source languages: ar\n* target languages: en\n* OPUS readme: ar-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.661"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #ar #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ar-en\n\n\n* source languages: ar\n* target languages: en\n* OPUS readme: ar-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.661"
] |
translation | transformers |
### ara-epo
* source group: Arabic
* target group: Esperanto
* OPUS readme: [ara-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md)
* model: transformer-align
* source language(s): apc apc_Latn ara arq arq_Latn arz
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.epo | 18.9 | 0.376 |
### System Info:
- hf_name: ara-epo
- source_languages: ara
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'eo']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt
- src_alpha3: ara
- tgt_alpha3: epo
- short_pair: ar-eo
- chrF2_score: 0.376
- bleu: 18.9
- brevity_penalty: 0.948
- ref_len: 4506.0
- src_name: Arabic
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ar
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ara-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-epo
* source group: Arabic
* target group: Esperanto
* OPUS readme: ara-epo
* model: transformer-align
* source language(s): apc apc\_Latn ara arq arq\_Latn arz
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.9, chr-F: 0.376
### System Info:
* hf\_name: ara-epo
* source\_languages: ara
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'eo']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: epo
* short\_pair: ar-eo
* chrF2\_score: 0.376
* bleu: 18.9
* brevity\_penalty: 0.948
* ref\_len: 4506.0
* src\_name: Arabic
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: ar
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: ara-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ara-epo\n\n\n* source group: Arabic\n* target group: Esperanto\n* OPUS readme: ara-epo\n* model: transformer-align\n* source language(s): apc apc\\_Latn ara arq arq\\_Latn arz\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.9, chr-F: 0.376",
"### System Info:\n\n\n* hf\\_name: ara-epo\n* source\\_languages: ara\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'eo']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: epo\n* short\\_pair: ar-eo\n* chrF2\\_score: 0.376\n* bleu: 18.9\n* brevity\\_penalty: 0.948\n* ref\\_len: 4506.0\n* src\\_name: Arabic\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ar\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ara-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-epo\n\n\n* source group: Arabic\n* target group: Esperanto\n* OPUS readme: ara-epo\n* model: transformer-align\n* source language(s): apc apc\\_Latn ara arq arq\\_Latn arz\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.9, chr-F: 0.376",
"### System Info:\n\n\n* hf\\_name: ara-epo\n* source\\_languages: ara\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'eo']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: epo\n* short\\_pair: ar-eo\n* chrF2\\_score: 0.376\n* bleu: 18.9\n* brevity\\_penalty: 0.948\n* ref\\_len: 4506.0\n* src\\_name: Arabic\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ar\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ara-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ara-spa
* source group: Arabic
* target group: Spanish
* OPUS readme: [ara-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.eval.txt)
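The three download links above follow a fixed pattern built from the language pair and release date. A small helper sketching that pattern for Tatoeba-Challenge releases (the naming is my own; older OPUS-MT-train models such as ar-en use a different base path, `OPUS-MT-models`):

```python
BASE = "https://object.pouta.csc.fi/Tatoeba-MT-models"

def opus_artifact_urls(pair: str, release: str) -> dict:
    """Build the weights / test-set / eval URLs for a Tatoeba-Challenge
    release, e.g. pair='ara-spa', release='2020-07-03'."""
    prefix = f"{BASE}/{pair}/opus-{release}"
    return {
        "weights": f"{prefix}.zip",
        "translations": f"{prefix}.test.txt",
        "scores": f"{prefix}.eval.txt",
    }

urls = opus_artifact_urls("ara-spa", "2020-07-03")
print(urls["weights"])
# https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip
```
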
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.spa | 46.0 | 0.641 |
### System Info:
- hf_name: ara-spa
- source_languages: ara
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'es']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: spa
- short_pair: ar-es
- chrF2_score: 0.6409999999999999
- bleu: 46.0
- brevity_penalty: 0.9620000000000001
- ref_len: 9708.0
- src_name: Arabic
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: es
- prefer_old: False
- long_pair: ara-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-spa
* source group: Arabic
* target group: Spanish
* OPUS readme: ara-spa
* model: transformer
* source language(s): apc apc\_Latn ara arq
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 46.0, chr-F: 0.641
### System Info:
* hf\_name: ara-spa
* source\_languages: ara
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'es']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: spa
* short\_pair: ar-es
* chrF2\_score: 0.6409999999999999
* bleu: 46.0
* brevity\_penalty: 0.9620000000000001
* ref\_len: 9708.0
* src\_name: Arabic
* tgt\_name: Spanish
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: ara-spa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ara-spa\n\n\n* source group: Arabic\n* target group: Spanish\n* OPUS readme: ara-spa\n* model: transformer\n* source language(s): apc apc\\_Latn ara arq\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.0, chr-F: 0.641",
"### System Info:\n\n\n* hf\\_name: ara-spa\n* source\\_languages: ara\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'es']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: spa\n* short\\_pair: ar-es\n* chrF2\\_score: 0.6409999999999999\n* bleu: 46.0\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 9708.0\n* src\\_name: Arabic\n* tgt\\_name: Spanish\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: ara-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-spa\n\n\n* source group: Arabic\n* target group: Spanish\n* OPUS readme: ara-spa\n* model: transformer\n* source language(s): apc apc\\_Latn ara arq\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.0, chr-F: 0.641",
"### System Info:\n\n\n* hf\\_name: ara-spa\n* source\\_languages: ara\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'es']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: spa\n* short\\_pair: ar-es\n* chrF2\\_score: 0.6409999999999999\n* bleu: 46.0\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 9708.0\n* src\\_name: Arabic\n* tgt\\_name: Spanish\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: ara-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ar-fr
* source languages: ar
* target languages: fr
* OPUS readme: [ar-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ar.fr | 43.5 | 0.602 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ar-fr
* source languages: ar
* target languages: fr
* OPUS readme: ar-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 43.5, chr-F: 0.602
| [
"### opus-mt-ar-fr\n\n\n* source languages: ar\n* target languages: fr\n* OPUS readme: ar-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.5, chr-F: 0.602"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ar-fr\n\n\n* source languages: ar\n* target languages: fr\n* OPUS readme: ar-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.5, chr-F: 0.602"
] |
translation | transformers |
### ara-heb
* source group: Arabic
* target group: Hebrew
* OPUS readme: [ara-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq arz
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.heb | 40.4 | 0.605 |
### System Info:
- hf_name: ara-heb
- source_languages: ara
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'he']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: heb
- short_pair: ar-he
- chrF2_score: 0.605
- bleu: 40.4
- brevity_penalty: 1.0
- ref_len: 6801.0
- src_name: Arabic
- tgt_name: Hebrew
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: he
- prefer_old: False
- long_pair: ara-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"he"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-heb
* source group: Arabic
* target group: Hebrew
* OPUS readme: ara-heb
* model: transformer
* source language(s): apc apc\_Latn ara arq arz
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.4, chr-F: 0.605
### System Info:
* hf\_name: ara-heb
* source\_languages: ara
* target\_languages: heb
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'he']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'heb'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: heb
* short\_pair: ar-he
* chrF2\_score: 0.605
* bleu: 40.4
* brevity\_penalty: 1.0
* ref\_len: 6801.0
* src\_name: Arabic
* tgt\_name: Hebrew
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: he
* prefer\_old: False
* long\_pair: ara-heb
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ara-heb\n\n\n* source group: Arabic\n* target group: Hebrew\n* OPUS readme: ara-heb\n* model: transformer\n* source language(s): apc apc\\_Latn ara arq arz\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.4, chr-F: 0.605",
"### System Info:\n\n\n* hf\\_name: ara-heb\n* source\\_languages: ara\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'he']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: heb\n* short\\_pair: ar-he\n* chrF2\\_score: 0.605\n* bleu: 40.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 6801.0\n* src\\_name: Arabic\n* tgt\\_name: Hebrew\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: ara-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-heb\n\n\n* source group: Arabic\n* target group: Hebrew\n* OPUS readme: ara-heb\n* model: transformer\n* source language(s): apc apc\\_Latn ara arq arz\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.4, chr-F: 0.605",
"### System Info:\n\n\n* hf\\_name: ara-heb\n* source\\_languages: ara\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'he']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: heb\n* short\\_pair: ar-he\n* chrF2\\_score: 0.605\n* bleu: 40.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 6801.0\n* src\\_name: Arabic\n* tgt\\_name: Hebrew\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: ara-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ara-ita
* source group: Arabic
* target group: Italian
* OPUS readme: [ara-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md)
* model: transformer
* source language(s): ara
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.ita | 44.2 | 0.658 |
### System Info:
- hf_name: ara-ita
- source_languages: ara
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'it']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: ita
- short_pair: ar-it
- chrF2_score: 0.6579999999999999
- bleu: 44.2
- brevity_penalty: 0.9890000000000001
- ref_len: 1495.0
- src_name: Arabic
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: it
- prefer_old: False
- long_pair: ara-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-ita
* source group: Arabic
* target group: Italian
* OPUS readme: ara-ita
* model: transformer
* source language(s): ara
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.2, chr-F: 0.658
### System Info:
* hf\_name: ara-ita
* source\_languages: ara
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'it']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'ita'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: ita
* short\_pair: ar-it
* chrF2\_score: 0.6579999999999999
* bleu: 44.2
* brevity\_penalty: 0.9890000000000001
* ref\_len: 1495.0
* src\_name: Arabic
* tgt\_name: Italian
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: it
* prefer\_old: False
* long\_pair: ara-ita
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
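The `brevity_penalty` and `ref_len` fields above are related through the standard BLEU brevity penalty. A minimal stdlib sketch of that relationship — the ~1479-token hypothesis length used below is back-solved from the ara-ita card's reported values (penalty ≈ 0.989, ref_len 1495), not a value stored in the card itself:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1.0 unless the hypothesis is shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

# With the ara-ita reference length (1495 tokens), a hypothesis of ~1479
# tokens reproduces the reported penalty of ~0.989.
print(round(brevity_penalty(1479, 1495), 3))  # 0.989
```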
| [
"### ara-ita\n\n\n* source group: Arabic\n* target group: Italian\n* OPUS readme: ara-ita\n* model: transformer\n* source language(s): ara\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.2, chr-F: 0.658",
"### System Info:\n\n\n* hf\\_name: ara-ita\n* source\\_languages: ara\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'it']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: ita\n* short\\_pair: ar-it\n* chrF2\\_score: 0.6579999999999999\n* bleu: 44.2\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 1495.0\n* src\\_name: Arabic\n* tgt\\_name: Italian\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: ara-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-ita\n\n\n* source group: Arabic\n* target group: Italian\n* OPUS readme: ara-ita\n* model: transformer\n* source language(s): ara\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.2, chr-F: 0.658",
"### System Info:\n\n\n* hf\\_name: ara-ita\n* source\\_languages: ara\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'it']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: ita\n* short\\_pair: ar-it\n* chrF2\\_score: 0.6579999999999999\n* bleu: 44.2\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 1495.0\n* src\\_name: Arabic\n* tgt\\_name: Italian\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: ara-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ara-pol
* source group: Arabic
* target group: Polish
* OPUS readme: [ara-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md)
* model: transformer
* source language(s): ara arz
* target language(s): pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.pol | 38.0 | 0.623 |
### System Info:
- hf_name: ara-pol
- source_languages: ara
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'pl']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: pol
- short_pair: ar-pl
- chrF2_score: 0.623
- bleu: 38.0
- brevity_penalty: 0.948
- ref_len: 1171.0
- src_name: Arabic
- tgt_name: Polish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: pl
- prefer_old: False
- long_pair: ara-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "pl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-pl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"pl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"pl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-pol
* source group: Arabic
* target group: Polish
* OPUS readme: ara-pol
* model: transformer
* source language(s): ara arz
* target language(s): pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.0, chr-F: 0.623
### System Info:
* hf\_name: ara-pol
* source\_languages: ara
* target\_languages: pol
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'pl']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'pol'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: pol
* short\_pair: ar-pl
* chrF2\_score: 0.623
* bleu: 38.0
* brevity\_penalty: 0.948
* ref\_len: 1171.0
* src\_name: Arabic
* tgt\_name: Polish
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: pl
* prefer\_old: False
* long\_pair: ara-pol
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ara-pol\n\n\n* source group: Arabic\n* target group: Polish\n* OPUS readme: ara-pol\n* model: transformer\n* source language(s): ara arz\n* target language(s): pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.623",
"### System Info:\n\n\n* hf\\_name: ara-pol\n* source\\_languages: ara\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'pl']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: pol\n* short\\_pair: ar-pl\n* chrF2\\_score: 0.623\n* bleu: 38.0\n* brevity\\_penalty: 0.948\n* ref\\_len: 1171.0\n* src\\_name: Arabic\n* tgt\\_name: Polish\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: ara-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-pol\n\n\n* source group: Arabic\n* target group: Polish\n* OPUS readme: ara-pol\n* model: transformer\n* source language(s): ara arz\n* target language(s): pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.623",
"### System Info:\n\n\n* hf\\_name: ara-pol\n* source\\_languages: ara\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'pl']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: pol\n* short\\_pair: ar-pl\n* chrF2\\_score: 0.623\n* bleu: 38.0\n* brevity\\_penalty: 0.948\n* ref\\_len: 1171.0\n* src\\_name: Arabic\n* tgt\\_name: Polish\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: ara-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ara-rus
* source group: Arabic
* target group: Russian
* OPUS readme: [ara-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md)
* model: transformer
* source language(s): apc ara arz
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.rus | 42.5 | 0.605 |
### System Info:
- hf_name: ara-rus
- source_languages: ara
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'ru']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: rus
- short_pair: ar-ru
- chrF2_score: 0.605
- bleu: 42.5
- brevity_penalty: 0.97
- ref_len: 21830.0
- src_name: Arabic
- tgt_name: Russian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: ru
- prefer_old: False
- long_pair: ara-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-rus
* source group: Arabic
* target group: Russian
* OPUS readme: ara-rus
* model: transformer
* source language(s): apc ara arz
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 42.5, chr-F: 0.605
### System Info:
* hf\_name: ara-rus
* source\_languages: ara
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'ru']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'rus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: rus
* short\_pair: ar-ru
* chrF2\_score: 0.605
* bleu: 42.5
* brevity\_penalty: 0.97
* ref\_len: 21830.0
* src\_name: Arabic
* tgt\_name: Russian
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: ru
* prefer\_old: False
* long\_pair: ara-rus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ara-rus\n\n\n* source group: Arabic\n* target group: Russian\n* OPUS readme: ara-rus\n* model: transformer\n* source language(s): apc ara arz\n* target language(s): rus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.5, chr-F: 0.605",
"### System Info:\n\n\n* hf\\_name: ara-rus\n* source\\_languages: ara\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'ru']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: rus\n* short\\_pair: ar-ru\n* chrF2\\_score: 0.605\n* bleu: 42.5\n* brevity\\_penalty: 0.97\n* ref\\_len: 21830.0\n* src\\_name: Arabic\n* tgt\\_name: Russian\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: ara-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-rus\n\n\n* source group: Arabic\n* target group: Russian\n* OPUS readme: ara-rus\n* model: transformer\n* source language(s): apc ara arz\n* target language(s): rus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.5, chr-F: 0.605",
"### System Info:\n\n\n* hf\\_name: ara-rus\n* source\\_languages: ara\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'ru']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: rus\n* short\\_pair: ar-ru\n* chrF2\\_score: 0.605\n* bleu: 42.5\n* brevity\\_penalty: 0.97\n* ref\\_len: 21830.0\n* src\\_name: Arabic\n* tgt\\_name: Russian\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: ara-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ara-tur
* source group: Arabic
* target group: Turkish
* OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md)
* model: transformer
* source language(s): apc_Latn ara ara_Latn arq_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.tur | 33.1 | 0.619 |
### System Info:
- hf_name: ara-tur
- source_languages: ara
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'tr']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: tur
- short_pair: ar-tr
- chrF2_score: 0.619
- bleu: 33.1
- brevity_penalty: 0.9570000000000001
- ref_len: 6949.0
- src_name: Arabic
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ara-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ar", "tr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ar-tr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ar",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar",
"tr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ar #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ara-tur
* source group: Arabic
* target group: Turkish
* OPUS readme: ara-tur
* model: transformer
* source language(s): apc\_Latn ara ara\_Latn arq\_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.1, chr-F: 0.619
### System Info:
* hf\_name: ara-tur
* source\_languages: ara
* target\_languages: tur
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ar', 'tr']
* src\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* tgt\_constituents: {'tur'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ara
* tgt\_alpha3: tur
* short\_pair: ar-tr
* chrF2\_score: 0.619
* bleu: 33.1
* brevity\_penalty: 0.9570000000000001
* ref\_len: 6949.0
* src\_name: Arabic
* tgt\_name: Turkish
* train\_date: 2020-07-03
* src\_alpha2: ar
* tgt\_alpha2: tr
* prefer\_old: False
* long\_pair: ara-tur
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
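The chr-F column in these benchmarks is the character n-gram F-score (chrF, β = 2). A simplified sketch of the metric follows — actual evaluations use sacrebleu, which handles whitespace, word-order weighting, and edge cases omitted here, so this is illustrative rather than a drop-in reimplementation:

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF operates on character n-grams with whitespace removed
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: averaged char n-gram precision/recall, F-score with beta=2."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p = sum(precisions) / max_n
    r = sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

With β = 2, recall is weighted more heavily than precision, which is why the reported chrF2 scores track translation adequacy more closely than BLEU does on short test sets like these.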
| [
"### ara-tur\n\n\n* source group: Arabic\n* target group: Turkish\n* OPUS readme: ara-tur\n* model: transformer\n* source language(s): apc\\_Latn ara ara\\_Latn arq\\_Latn\n* target language(s): tur\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.619",
"### System Info:\n\n\n* hf\\_name: ara-tur\n* source\\_languages: ara\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'tr']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: tur\n* short\\_pair: ar-tr\n* chrF2\\_score: 0.619\n* bleu: 33.1\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 6949.0\n* src\\_name: Arabic\n* tgt\\_name: Turkish\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: ara-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ar #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ara-tur\n\n\n* source group: Arabic\n* target group: Turkish\n* OPUS readme: ara-tur\n* model: transformer\n* source language(s): apc\\_Latn ara ara\\_Latn arq\\_Latn\n* target language(s): tur\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.619",
"### System Info:\n\n\n* hf\\_name: ara-tur\n* source\\_languages: ara\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ar', 'tr']\n* src\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ara\n* tgt\\_alpha3: tur\n* short\\_pair: ar-tr\n* chrF2\\_score: 0.619\n* bleu: 33.1\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 6949.0\n* src\\_name: Arabic\n* tgt\\_name: Turkish\n* train\\_date: 2020-07-03\n* src\\_alpha2: ar\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: ara-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### art-eng
* source group: Artificial languages
* target group: English
* OPUS readme: [art-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md)
* model: transformer
* source language(s): afh_Latn avk_Latn dws_Latn epo ido ido_Latn ile_Latn ina_Latn jbo jbo_Cyrl jbo_Latn ldn_Latn lfn_Cyrl lfn_Latn nov_Latn qya qya_Latn sjn_Latn tlh_Latn tzl tzl_Latn vol_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afh-eng.afh.eng | 1.2 | 0.099 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.105 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.076 |
| Tatoeba-test.epo-eng.epo.eng | 34.6 | 0.530 |
| Tatoeba-test.ido-eng.ido.eng | 12.7 | 0.310 |
| Tatoeba-test.ile-eng.ile.eng | 4.6 | 0.218 |
| Tatoeba-test.ina-eng.ina.eng | 5.8 | 0.254 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.2 | 0.115 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.7 | 0.083 |
| Tatoeba-test.lfn-eng.lfn.eng | 1.8 | 0.172 |
| Tatoeba-test.multi.eng | 11.6 | 0.287 |
| Tatoeba-test.nov-eng.nov.eng | 5.1 | 0.215 |
| Tatoeba-test.qya-eng.qya.eng | 0.7 | 0.113 |
| Tatoeba-test.sjn-eng.sjn.eng | 0.9 | 0.090 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.124 |
| Tatoeba-test.tzl-eng.tzl.eng | 1.4 | 0.109 |
| Tatoeba-test.vol-eng.vol.eng | 0.5 | 0.115 |
### System Info:
- hf_name: art-eng
- source_languages: art
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'io', 'art', 'en']
- src_constituents: {'sjn_Latn', 'tzl', 'vol_Latn', 'qya', 'tlh_Latn', 'ile_Latn', 'ido_Latn', 'tzl_Latn', 'jbo_Cyrl', 'jbo', 'lfn_Latn', 'nov_Latn', 'dws_Latn', 'ldn_Latn', 'avk_Latn', 'lfn_Cyrl', 'ina_Latn', 'jbo_Latn', 'epo', 'afh_Latn', 'qya_Latn', 'ido'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt
- src_alpha3: art
- tgt_alpha3: eng
- short_pair: art-en
- chrF2_score: 0.287
- bleu: 11.6
- brevity_penalty: 1.0
- ref_len: 73037.0
- src_name: Artificial languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: art
- tgt_alpha2: en
- prefer_old: False
- long_pair: art-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "io", "art", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-art-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"io",
"art",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"io",
"art",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #io #art #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### art-eng
* source group: Artificial languages
* target group: English
* OPUS readme: art-eng
* model: transformer
* source language(s): afh\_Latn avk\_Latn dws\_Latn epo ido ido\_Latn ile\_Latn ina\_Latn jbo jbo\_Cyrl jbo\_Latn ldn\_Latn lfn\_Cyrl lfn\_Latn nov\_Latn qya qya\_Latn sjn\_Latn tlh\_Latn tzl tzl\_Latn vol\_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 1.2, chr-F: 0.099
testset: URL, BLEU: 0.4, chr-F: 0.105
testset: URL, BLEU: 1.6, chr-F: 0.076
testset: URL, BLEU: 34.6, chr-F: 0.530
testset: URL, BLEU: 12.7, chr-F: 0.310
testset: URL, BLEU: 4.6, chr-F: 0.218
testset: URL, BLEU: 5.8, chr-F: 0.254
testset: URL, BLEU: 0.2, chr-F: 0.115
testset: URL, BLEU: 0.7, chr-F: 0.083
testset: URL, BLEU: 1.8, chr-F: 0.172
testset: URL, BLEU: 11.6, chr-F: 0.287
testset: URL, BLEU: 5.1, chr-F: 0.215
testset: URL, BLEU: 0.7, chr-F: 0.113
testset: URL, BLEU: 0.9, chr-F: 0.090
testset: URL, BLEU: 0.2, chr-F: 0.124
testset: URL, BLEU: 1.4, chr-F: 0.109
testset: URL, BLEU: 0.5, chr-F: 0.115
### System Info:
* hf\_name: art-eng
* source\_languages: art
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'io', 'art', 'en']
* src\_constituents: {'sjn\_Latn', 'tzl', 'vol\_Latn', 'qya', 'tlh\_Latn', 'ile\_Latn', 'ido\_Latn', 'tzl\_Latn', 'jbo\_Cyrl', 'jbo', 'lfn\_Latn', 'nov\_Latn', 'dws\_Latn', 'ldn\_Latn', 'avk\_Latn', 'lfn\_Cyrl', 'ina\_Latn', 'jbo\_Latn', 'epo', 'afh\_Latn', 'qya\_Latn', 'ido'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: art
* tgt\_alpha3: eng
* short\_pair: art-en
* chrF2\_score: 0.287
* bleu: 11.6
* brevity\_penalty: 1.0
* ref\_len: 73037.0
* src\_name: Artificial languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: art
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: art-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### art-eng\n\n\n* source group: Artificial languages\n* target group: English\n* OPUS readme: art-eng\n* model: transformer\n* source language(s): afh\\_Latn avk\\_Latn dws\\_Latn epo ido ido\\_Latn ile\\_Latn ina\\_Latn jbo jbo\\_Cyrl jbo\\_Latn ldn\\_Latn lfn\\_Cyrl lfn\\_Latn nov\\_Latn qya qya\\_Latn sjn\\_Latn tlh\\_Latn tzl tzl\\_Latn vol\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 1.2, chr-F: 0.099\ntestset: URL, BLEU: 0.4, chr-F: 0.105\ntestset: URL, BLEU: 1.6, chr-F: 0.076\ntestset: URL, BLEU: 34.6, chr-F: 0.530\ntestset: URL, BLEU: 12.7, chr-F: 0.310\ntestset: URL, BLEU: 4.6, chr-F: 0.218\ntestset: URL, BLEU: 5.8, chr-F: 0.254\ntestset: URL, BLEU: 0.2, chr-F: 0.115\ntestset: URL, BLEU: 0.7, chr-F: 0.083\ntestset: URL, BLEU: 1.8, chr-F: 0.172\ntestset: URL, BLEU: 11.6, chr-F: 0.287\ntestset: URL, BLEU: 5.1, chr-F: 0.215\ntestset: URL, BLEU: 0.7, chr-F: 0.113\ntestset: URL, BLEU: 0.9, chr-F: 0.090\ntestset: URL, BLEU: 0.2, chr-F: 0.124\ntestset: URL, BLEU: 1.4, chr-F: 0.109\ntestset: URL, BLEU: 0.5, chr-F: 0.115",
"### System Info:\n\n\n* hf\\_name: art-eng\n* source\\_languages: art\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'io', 'art', 'en']\n* src\\_constituents: {'sjn\\_Latn', 'tzl', 'vol\\_Latn', 'qya', 'tlh\\_Latn', 'ile\\_Latn', 'ido\\_Latn', 'tzl\\_Latn', 'jbo\\_Cyrl', 'jbo', 'lfn\\_Latn', 'nov\\_Latn', 'dws\\_Latn', 'ldn\\_Latn', 'avk\\_Latn', 'lfn\\_Cyrl', 'ina\\_Latn', 'jbo\\_Latn', 'epo', 'afh\\_Latn', 'qya\\_Latn', 'ido'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: art\n* tgt\\_alpha3: eng\n* short\\_pair: art-en\n* chrF2\\_score: 0.287\n* bleu: 11.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 73037.0\n* src\\_name: Artificial languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: art\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: art-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #io #art #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### art-eng\n\n\n* source group: Artificial languages\n* target group: English\n* OPUS readme: art-eng\n* model: transformer\n* source language(s): afh\\_Latn avk\\_Latn dws\\_Latn epo ido ido\\_Latn ile\\_Latn ina\\_Latn jbo jbo\\_Cyrl jbo\\_Latn ldn\\_Latn lfn\\_Cyrl lfn\\_Latn nov\\_Latn qya qya\\_Latn sjn\\_Latn tlh\\_Latn tzl tzl\\_Latn vol\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 1.2, chr-F: 0.099\ntestset: URL, BLEU: 0.4, chr-F: 0.105\ntestset: URL, BLEU: 1.6, chr-F: 0.076\ntestset: URL, BLEU: 34.6, chr-F: 0.530\ntestset: URL, BLEU: 12.7, chr-F: 0.310\ntestset: URL, BLEU: 4.6, chr-F: 0.218\ntestset: URL, BLEU: 5.8, chr-F: 0.254\ntestset: URL, BLEU: 0.2, chr-F: 0.115\ntestset: URL, BLEU: 0.7, chr-F: 0.083\ntestset: URL, BLEU: 1.8, chr-F: 0.172\ntestset: URL, BLEU: 11.6, chr-F: 0.287\ntestset: URL, BLEU: 5.1, chr-F: 0.215\ntestset: URL, BLEU: 0.7, chr-F: 0.113\ntestset: URL, BLEU: 0.9, chr-F: 0.090\ntestset: URL, BLEU: 0.2, chr-F: 0.124\ntestset: URL, BLEU: 1.4, chr-F: 0.109\ntestset: URL, BLEU: 0.5, chr-F: 0.115",
"### System Info:\n\n\n* hf\\_name: art-eng\n* source\\_languages: art\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'io', 'art', 'en']\n* src\\_constituents: {'sjn\\_Latn', 'tzl', 'vol\\_Latn', 'qya', 'tlh\\_Latn', 'ile\\_Latn', 'ido\\_Latn', 'tzl\\_Latn', 'jbo\\_Cyrl', 'jbo', 'lfn\\_Latn', 'nov\\_Latn', 'dws\\_Latn', 'ldn\\_Latn', 'avk\\_Latn', 'lfn\\_Cyrl', 'ina\\_Latn', 'jbo\\_Latn', 'epo', 'afh\\_Latn', 'qya\\_Latn', 'ido'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: art\n* tgt\\_alpha3: eng\n* short\\_pair: art-en\n* chrF2\\_score: 0.287\n* bleu: 11.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 73037.0\n* src\\_name: Artificial languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: art\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: art-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ase-de
* source languages: ase
* target languages: de
* OPUS readme: [ase-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.de | 27.2 | 0.478 |
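For context on the chr-F column: chrF is a character n-gram F-score. A simplified, single-order sketch of the idea (the real metric averages over n = 1..6 with β = 2; this toy version is illustrative only, not the scoring script used for the table above):

```python
from collections import Counter

def char_ngrams(text, n):
    # Character n-grams with whitespace removed (a simplification).
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf2(hypothesis, reference, n=2, beta=2.0):
    # F-beta score over character n-gram overlap (single order only).
    hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
    overlap = sum((hyp & ref).values())
    if not hyp or not ref:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```

Character n-grams make the metric more forgiving of morphological variation than word-level BLEU.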
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ase-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ase",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ase #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ase-de
* source languages: ase
* target languages: de
* OPUS readme: ase-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.2, chr-F: 0.478
| [
"### opus-mt-ase-de\n\n\n* source languages: ase\n* target languages: de\n* OPUS readme: ase-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.478"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ase #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ase-de\n\n\n* source languages: ase\n* target languages: de\n* OPUS readme: ase-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.478"
] |
translation | transformers |
### opus-mt-ase-en
* source languages: ase
* target languages: en
* OPUS readme: [ase-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.en | 99.5 | 0.997 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ase-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ase",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ase #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ase-en
* source languages: ase
* target languages: en
* OPUS readme: ase-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 99.5, chr-F: 0.997
| [
"### opus-mt-ase-en\n\n\n* source languages: ase\n* target languages: en\n* OPUS readme: ase-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 99.5, chr-F: 0.997"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ase #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ase-en\n\n\n* source languages: ase\n* target languages: en\n* OPUS readme: ase-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 99.5, chr-F: 0.997"
] |
translation | transformers |
### opus-mt-ase-es
* source languages: ase
* target languages: es
* OPUS readme: [ase-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.es | 31.7 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ase-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ase",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ase #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ase-es
* source languages: ase
* target languages: es
* OPUS readme: ase-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.7, chr-F: 0.498
| [
"### opus-mt-ase-es\n\n\n* source languages: ase\n* target languages: es\n* OPUS readme: ase-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ase #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ase-es\n\n\n* source languages: ase\n* target languages: es\n* OPUS readme: ase-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-ase-fr
* source languages: ase
* target languages: fr
* OPUS readme: [ase-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.fr | 37.8 | 0.553 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ase-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ase",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ase #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ase-fr
* source languages: ase
* target languages: fr
* OPUS readme: ase-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.8, chr-F: 0.553
| [
"### opus-mt-ase-fr\n\n\n* source languages: ase\n* target languages: fr\n* OPUS readme: ase-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.553"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ase #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ase-fr\n\n\n* source languages: ase\n* target languages: fr\n* OPUS readme: ase-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.553"
] |
translation | transformers |
### opus-mt-ase-sv
* source languages: ase
* target languages: sv
* OPUS readme: [ase-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.sv | 39.7 | 0.576 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ase-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ase",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ase #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ase-sv
* source languages: ase
* target languages: sv
* OPUS readme: ase-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.7, chr-F: 0.576
| [
"### opus-mt-ase-sv\n\n\n* source languages: ase\n* target languages: sv\n* OPUS readme: ase-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.7, chr-F: 0.576"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ase #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ase-sv\n\n\n* source languages: ase\n* target languages: sv\n* OPUS readme: ase-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.7, chr-F: 0.576"
] |
translation | transformers |
### aze-eng
* source group: Azerbaijani
* target group: English
* OPUS readme: [aze-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.eng | 31.9 | 0.490 |
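The System Info section below reports `brevity_penalty` and `ref_len` alongside BLEU. For reference, BLEU's brevity penalty can be sketched as follows (standard definition, assumed here; not code from this repository):

```python
import math

def brevity_penalty(candidate_len, reference_len):
    # BLEU penalizes candidates shorter than the reference;
    # longer candidates are already penalized by the precisions.
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```

A penalty of 0.997 against ref_len = 16165 indicates system output only marginally shorter than the reference.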
### System Info:
- hf_name: aze-eng
- source_languages: aze
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'en']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: eng
- short_pair: az-en
- chrF2_score: 0.49
- bleu: 31.9
- brevity_penalty: 0.997
- ref_len: 16165.0
- src_name: Azerbaijani
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: en
- prefer_old: False
- long_pair: aze-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["az", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-az-en | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"az",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"az",
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #az #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### aze-eng
* source group: Azerbaijani
* target group: English
* OPUS readme: aze-eng
* model: transformer-align
* source language(s): aze\_Latn
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.9, chr-F: 0.490
### System Info:
* hf\_name: aze-eng
* source\_languages: aze
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['az', 'en']
* src\_constituents: {'aze\_Latn'}
* tgt\_constituents: {'eng'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: aze
* tgt\_alpha3: eng
* short\_pair: az-en
* chrF2\_score: 0.49
* bleu: 31.9
* brevity\_penalty: 0.997
* ref\_len: 16165.0
* src\_name: Azerbaijani
* tgt\_name: English
* train\_date: 2020-06-16
* src\_alpha2: az
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: aze-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### aze-eng\n\n\n* source group: Azerbaijani\n* target group: English\n* OPUS readme: aze-eng\n* model: transformer-align\n* source language(s): aze\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.9, chr-F: 0.490",
"### System Info:\n\n\n* hf\\_name: aze-eng\n* source\\_languages: aze\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['az', 'en']\n* src\\_constituents: {'aze\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aze\n* tgt\\_alpha3: eng\n* short\\_pair: az-en\n* chrF2\\_score: 0.49\n* bleu: 31.9\n* brevity\\_penalty: 0.997\n* ref\\_len: 16165.0\n* src\\_name: Azerbaijani\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: az\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: aze-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #az #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### aze-eng\n\n\n* source group: Azerbaijani\n* target group: English\n* OPUS readme: aze-eng\n* model: transformer-align\n* source language(s): aze\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.9, chr-F: 0.490",
"### System Info:\n\n\n* hf\\_name: aze-eng\n* source\\_languages: aze\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['az', 'en']\n* src\\_constituents: {'aze\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aze\n* tgt\\_alpha3: eng\n* short\\_pair: az-en\n* chrF2\\_score: 0.49\n* bleu: 31.9\n* brevity\\_penalty: 0.997\n* ref\\_len: 16165.0\n* src\\_name: Azerbaijani\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: az\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: aze-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### aze-spa
* source group: Azerbaijani
* target group: Spanish
* OPUS readme: [aze-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.spa | 11.8 | 0.346 |
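The BLEU score above combines clipped ("modified") n-gram precisions with a brevity penalty. A minimal sketch of modified n-gram precision (illustrative only, not the evaluation script behind this table):

```python
from collections import Counter

def modified_precision(candidate_tokens, reference_tokens, n=1):
    # Count candidate n-grams, clipping each count by its reference count.
    cand = Counter(tuple(candidate_tokens[i:i + n])
                   for i in range(len(candidate_tokens) - n + 1))
    ref = Counter(tuple(reference_tokens[i:i + n])
                  for i in range(len(reference_tokens) - n + 1))
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0
```

Clipping prevents a degenerate candidate that repeats one common reference word from scoring perfect precision.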
### System Info:
- hf_name: aze-spa
- source_languages: aze
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'es']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: spa
- short_pair: az-es
- chrF2_score: 0.34600000000000003
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 1144.0
- src_name: Azerbaijani
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: es
- prefer_old: False
- long_pair: aze-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["az", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-az-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"az",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"az",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #az #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### aze-spa
* source group: Azerbaijani
* target group: Spanish
* OPUS readme: aze-spa
* model: transformer-align
* source language(s): aze\_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.8, chr-F: 0.346
### System Info:
* hf\_name: aze-spa
* source\_languages: aze
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['az', 'es']
* src\_constituents: {'aze\_Latn'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: aze
* tgt\_alpha3: spa
* short\_pair: az-es
* chrF2\_score: 0.34600000000000003
* bleu: 11.8
* brevity\_penalty: 1.0
* ref\_len: 1144.0
* src\_name: Azerbaijani
* tgt\_name: Spanish
* train\_date: 2020-06-16
* src\_alpha2: az
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: aze-spa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### aze-spa\n\n\n* source group: Azerbaijani\n* target group: Spanish\n* OPUS readme: aze-spa\n* model: transformer-align\n* source language(s): aze\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.346",
"### System Info:\n\n\n* hf\\_name: aze-spa\n* source\\_languages: aze\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['az', 'es']\n* src\\_constituents: {'aze\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aze\n* tgt\\_alpha3: spa\n* short\\_pair: az-es\n* chrF2\\_score: 0.34600000000000003\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 1144.0\n* src\\_name: Azerbaijani\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-16\n* src\\_alpha2: az\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: aze-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #az #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### aze-spa\n\n\n* source group: Azerbaijani\n* target group: Spanish\n* OPUS readme: aze-spa\n* model: transformer-align\n* source language(s): aze\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.346",
"### System Info:\n\n\n* hf\\_name: aze-spa\n* source\\_languages: aze\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['az', 'es']\n* src\\_constituents: {'aze\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aze\n* tgt\\_alpha3: spa\n* short\\_pair: az-es\n* chrF2\\_score: 0.34600000000000003\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 1144.0\n* src\\_name: Azerbaijani\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-16\n* src\\_alpha2: az\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: aze-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### aze-tur
* source group: Azerbaijani
* target group: Turkish
* OPUS readme: [aze-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.tur | 24.4 | 0.529 |
### System Info:
- hf_name: aze-tur
- source_languages: aze
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'tr']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: tur
- short_pair: az-tr
- chrF2_score: 0.529
- bleu: 24.4
- brevity_penalty: 0.956
- ref_len: 5380.0
- src_name: Azerbaijani
- tgt_name: Turkish
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: tr
- prefer_old: False
- long_pair: aze-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["az", "tr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-az-tr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"az",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"az",
"tr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #az #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### aze-tur
* source group: Azerbaijani
* target group: Turkish
* OPUS readme: aze-tur
* model: transformer-align
* source language(s): aze\_Latn
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.4, chr-F: 0.529
### System Info:
* hf\_name: aze-tur
* source\_languages: aze
* target\_languages: tur
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['az', 'tr']
* src\_constituents: {'aze\_Latn'}
* tgt\_constituents: {'tur'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: aze
* tgt\_alpha3: tur
* short\_pair: az-tr
* chrF2\_score: 0.529
* bleu: 24.4
* brevity\_penalty: 0.956
* ref\_len: 5380.0
* src\_name: Azerbaijani
* tgt\_name: Turkish
* train\_date: 2020-06-16
* src\_alpha2: az
* tgt\_alpha2: tr
* prefer\_old: False
* long\_pair: aze-tur
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### aze-tur\n\n\n* source group: Azerbaijani\n* target group: Turkish\n* OPUS readme: aze-tur\n* model: transformer-align\n* source language(s): aze\\_Latn\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.529",
"### System Info:\n\n\n* hf\\_name: aze-tur\n* source\\_languages: aze\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['az', 'tr']\n* src\\_constituents: {'aze\\_Latn'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aze\n* tgt\\_alpha3: tur\n* short\\_pair: az-tr\n* chrF2\\_score: 0.529\n* bleu: 24.4\n* brevity\\_penalty: 0.956\n* ref\\_len: 5380.0\n* src\\_name: Azerbaijani\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-16\n* src\\_alpha2: az\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: aze-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #az #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### aze-tur\n\n\n* source group: Azerbaijani\n* target group: Turkish\n* OPUS readme: aze-tur\n* model: transformer-align\n* source language(s): aze\\_Latn\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.529",
"### System Info:\n\n\n* hf\\_name: aze-tur\n* source\\_languages: aze\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['az', 'tr']\n* src\\_constituents: {'aze\\_Latn'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: aze\n* tgt\\_alpha3: tur\n* short\\_pair: az-tr\n* chrF2\\_score: 0.529\n* bleu: 24.4\n* brevity\\_penalty: 0.956\n* ref\\_len: 5380.0\n* src\\_name: Azerbaijani\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-16\n* src\\_alpha2: az\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: aze-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
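The repository ids in these rows (e.g. `Helsinki-NLP/opus-mt-az-tr`) follow a fixed pattern built from the `src_alpha2` and `tgt_alpha2` fields. A minimal sketch of that naming convention — the helper function name is ours, not part of any library:

```python
def opus_mt_model_id(src_alpha2: str, tgt_alpha2: str) -> str:
    """Build the Hugging Face repo id for an OPUS-MT language pair.

    Mirrors the id column of the rows above:
    short pair "az-tr" -> "Helsinki-NLP/opus-mt-az-tr".
    """
    return f"Helsinki-NLP/opus-mt-{src_alpha2}-{tgt_alpha2}"

# Matches the id column of the az-tr row above.
print(opus_mt_model_id("az", "tr"))  # -> Helsinki-NLP/opus-mt-az-tr
```

The same pattern holds for three-letter source codes such as `bcl` in the rows that follow.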
translation | transformers |
### bat-eng
* source group: Baltic languages
* target group: English
* OPUS readme: [bat-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md)
* model: transformer
* source language(s): lav lit ltg prg_Latn sgs
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv-laveng.lav.eng | 27.5 | 0.566 |
| newsdev2019-enlt-liteng.lit.eng | 27.8 | 0.557 |
| newstest2017-enlv-laveng.lav.eng | 21.1 | 0.512 |
| newstest2019-lten-liteng.lit.eng | 30.2 | 0.592 |
| Tatoeba-test.lav-eng.lav.eng | 51.5 | 0.687 |
| Tatoeba-test.lit-eng.lit.eng | 55.1 | 0.703 |
| Tatoeba-test.multi.eng | 50.6 | 0.662 |
| Tatoeba-test.prg-eng.prg.eng | 1.0 | 0.159 |
| Tatoeba-test.sgs-eng.sgs.eng | 16.5 | 0.265 |
### System Info:
- hf_name: bat-eng
- source_languages: bat
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'lv', 'bat', 'en']
- src_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt
- src_alpha3: bat
- tgt_alpha3: eng
- short_pair: bat-en
- chrF2_score: 0.662
- bleu: 50.6
- brevity_penalty: 0.9890000000000001
- ref_len: 30772.0
- src_name: Baltic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: bat
- tgt_alpha2: en
- prefer_old: False
- long_pair: bat-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["lt", "lv", "bat", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bat-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lt",
"lv",
"bat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"lt",
"lv",
"bat",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #lt #lv #bat #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bat-eng
* source group: Baltic languages
* target group: English
* OPUS readme: bat-eng
* model: transformer
* source language(s): lav lit ltg prg\_Latn sgs
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.5, chr-F: 0.566
testset: URL, BLEU: 27.8, chr-F: 0.557
testset: URL, BLEU: 21.1, chr-F: 0.512
testset: URL, BLEU: 30.2, chr-F: 0.592
testset: URL, BLEU: 51.5, chr-F: 0.687
testset: URL, BLEU: 55.1, chr-F: 0.703
testset: URL, BLEU: 50.6, chr-F: 0.662
testset: URL, BLEU: 1.0, chr-F: 0.159
testset: URL, BLEU: 16.5, chr-F: 0.265
### System Info:
* hf\_name: bat-eng
* source\_languages: bat
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['lt', 'lv', 'bat', 'en']
* src\_constituents: {'lit', 'lav', 'prg\_Latn', 'ltg', 'sgs'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bat
* tgt\_alpha3: eng
* short\_pair: bat-en
* chrF2\_score: 0.662
* bleu: 50.6
* brevity\_penalty: 0.9890000000000001
* ref\_len: 30772.0
* src\_name: Baltic languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: bat
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: bat-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bat-eng\n\n\n* source group: Baltic languages\n* target group: English\n* OPUS readme: bat-eng\n* model: transformer\n* source language(s): lav lit ltg prg\\_Latn sgs\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.566\ntestset: URL, BLEU: 27.8, chr-F: 0.557\ntestset: URL, BLEU: 21.1, chr-F: 0.512\ntestset: URL, BLEU: 30.2, chr-F: 0.592\ntestset: URL, BLEU: 51.5, chr-F: 0.687\ntestset: URL, BLEU: 55.1, chr-F: 0.703\ntestset: URL, BLEU: 50.6, chr-F: 0.662\ntestset: URL, BLEU: 1.0, chr-F: 0.159\ntestset: URL, BLEU: 16.5, chr-F: 0.265",
"### System Info:\n\n\n* hf\\_name: bat-eng\n* source\\_languages: bat\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'lv', 'bat', 'en']\n* src\\_constituents: {'lit', 'lav', 'prg\\_Latn', 'ltg', 'sgs'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bat\n* tgt\\_alpha3: eng\n* short\\_pair: bat-en\n* chrF2\\_score: 0.662\n* bleu: 50.6\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 30772.0\n* src\\_name: Baltic languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: bat\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: bat-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #lv #bat #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bat-eng\n\n\n* source group: Baltic languages\n* target group: English\n* OPUS readme: bat-eng\n* model: transformer\n* source language(s): lav lit ltg prg\\_Latn sgs\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.566\ntestset: URL, BLEU: 27.8, chr-F: 0.557\ntestset: URL, BLEU: 21.1, chr-F: 0.512\ntestset: URL, BLEU: 30.2, chr-F: 0.592\ntestset: URL, BLEU: 51.5, chr-F: 0.687\ntestset: URL, BLEU: 55.1, chr-F: 0.703\ntestset: URL, BLEU: 50.6, chr-F: 0.662\ntestset: URL, BLEU: 1.0, chr-F: 0.159\ntestset: URL, BLEU: 16.5, chr-F: 0.265",
"### System Info:\n\n\n* hf\\_name: bat-eng\n* source\\_languages: bat\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'lv', 'bat', 'en']\n* src\\_constituents: {'lit', 'lav', 'prg\\_Latn', 'ltg', 'sgs'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bat\n* tgt\\_alpha3: eng\n* short\\_pair: bat-en\n* chrF2\\_score: 0.662\n* bleu: 50.6\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 30772.0\n* src\\_name: Baltic languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: bat\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: bat-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
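Each row reports a `brevity_penalty` alongside its BLEU score. In standard BLEU, the penalty is 1.0 when the hypothesis corpus is at least as long as the reference and exp(1 − ref_len/hyp_len) otherwise, which is why the values here sit at or just below 1.0. A small sketch of that formula; the hypothesis length used below is an illustrative assumption, not a value taken from the rows:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: penalizes output shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# No penalty when the output is as long as the reference ...
print(brevity_penalty(30772, 30772))  # -> 1.0
# ... and a value just under 1.0 for slightly short output (hyp_len 30435 is
# assumed here for illustration), comparable to the 0.989 reported for bat-eng above.
print(round(brevity_penalty(30435, 30772), 3))  # -> 0.989
```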
translation | transformers |
### opus-mt-bcl-de
* source languages: bcl
* target languages: de
* OPUS readme: [bcl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.de | 30.3 | 0.510 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bcl-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bcl",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bcl-de
* source languages: bcl
* target languages: de
* OPUS readme: bcl-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.3, chr-F: 0.510
| [
"### opus-mt-bcl-de\n\n\n* source languages: bcl\n* target languages: de\n* OPUS readme: bcl-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.510"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bcl-de\n\n\n* source languages: bcl\n* target languages: de\n* OPUS readme: bcl-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.510"
] |
translation | transformers |
### opus-mt-bcl-en
* source languages: bcl
* target languages: en
* OPUS readme: [bcl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.en | 56.1 | 0.697 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bcl-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bcl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bcl-en
* source languages: bcl
* target languages: en
* OPUS readme: bcl-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 56.1, chr-F: 0.697
| [
"### opus-mt-bcl-en\n\n\n* source languages: bcl\n* target languages: en\n* OPUS readme: bcl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.1, chr-F: 0.697"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bcl-en\n\n\n* source languages: bcl\n* target languages: en\n* OPUS readme: bcl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.1, chr-F: 0.697"
] |
translation | transformers |
### opus-mt-bcl-es
* source languages: bcl
* target languages: es
* OPUS readme: [bcl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.es | 37.0 | 0.551 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bcl-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bcl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bcl-es
* source languages: bcl
* target languages: es
* OPUS readme: bcl-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.0, chr-F: 0.551
| [
"### opus-mt-bcl-es\n\n\n* source languages: bcl\n* target languages: es\n* OPUS readme: bcl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.551"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bcl-es\n\n\n* source languages: bcl\n* target languages: es\n* OPUS readme: bcl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.551"
] |
translation | transformers |
### opus-mt-bcl-fi
* source languages: bcl
* target languages: fi
* OPUS readme: [bcl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fi | 33.3 | 0.573 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bcl-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bcl",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bcl-fi
* source languages: bcl
* target languages: fi
* OPUS readme: bcl-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.3, chr-F: 0.573
| [
"### opus-mt-bcl-fi\n\n\n* source languages: bcl\n* target languages: fi\n* OPUS readme: bcl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.573"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bcl-fi\n\n\n* source languages: bcl\n* target languages: fi\n* OPUS readme: bcl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.573"
] |
translation | transformers |
### opus-mt-bcl-fr
* source languages: bcl
* target languages: fr
* OPUS readme: [bcl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fr | 35.0 | 0.527 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bcl-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bcl",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bcl-fr
* source languages: bcl
* target languages: fr
* OPUS readme: bcl-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.0, chr-F: 0.527
| [
"### opus-mt-bcl-fr\n\n\n* source languages: bcl\n* target languages: fr\n* OPUS readme: bcl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.527"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bcl-fr\n\n\n* source languages: bcl\n* target languages: fr\n* OPUS readme: bcl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.527"
] |
translation | transformers |
### opus-mt-bcl-sv
* source languages: bcl
* target languages: sv
* OPUS readme: [bcl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.sv | 38.0 | 0.565 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bcl-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bcl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bcl-sv
* source languages: bcl
* target languages: sv
* OPUS readme: bcl-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.0, chr-F: 0.565
| [
"### opus-mt-bcl-sv\n\n\n* source languages: bcl\n* target languages: sv\n* OPUS readme: bcl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.565"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bcl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bcl-sv\n\n\n* source languages: bcl\n* target languages: sv\n* OPUS readme: bcl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.565"
] |
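The metadata column in each row is a JSON object holding the language list, license, and tags. A quick sketch of reading it programmatically, using the metadata string exactly as it appears in the bcl-sv row above:

```python
import json

# Metadata string as it appears in the bcl-sv row above.
raw = '{"license": "apache-2.0", "tags": ["translation"]}'

meta = json.loads(raw)
print(meta["license"])  # -> apache-2.0
print(meta["tags"])     # -> ['translation']
```

Rows for Tatoeba-Challenge pairs additionally carry a `"language"` key listing the alpha-2 codes, as in the be-es row that follows.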
translation | transformers |
### bel-spa
* source group: Belarusian
* target group: Spanish
* OPUS readme: [bel-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md)
* model: transformer-align
* source language(s): bel bel_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel.spa | 11.8 | 0.272 |
### System Info:
- hf_name: bel-spa
- source_languages: bel
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'es']
- src_constituents: {'bel', 'bel_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt
- src_alpha3: bel
- tgt_alpha3: spa
- short_pair: be-es
- chrF2_score: 0.272
- bleu: 11.8
- brevity_penalty: 0.892
- ref_len: 1412.0
- src_name: Belarusian
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: be
- tgt_alpha2: es
- prefer_old: False
- long_pair: bel-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["be", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-be-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"be",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"be",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #be #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bel-spa
* source group: Belarusian
* target group: Spanish
* OPUS readme: bel-spa
* model: transformer-align
* source language(s): bel bel\_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.8, chr-F: 0.272
### System Info:
* hf\_name: bel-spa
* source\_languages: bel
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['be', 'es']
* src\_constituents: {'bel', 'bel\_Latn'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bel
* tgt\_alpha3: spa
* short\_pair: be-es
* chrF2\_score: 0.272
* bleu: 11.8
* brevity\_penalty: 0.892
* ref\_len: 1412.0
* src\_name: Belarusian
* tgt\_name: Spanish
* train\_date: 2020-06-16
* src\_alpha2: be
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: bel-spa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bel-spa\n\n\n* source group: Belarusian\n* target group: Spanish\n* OPUS readme: bel-spa\n* model: transformer-align\n* source language(s): bel bel\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.272",
"### System Info:\n\n\n* hf\\_name: bel-spa\n* source\\_languages: bel\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'es']\n* src\\_constituents: {'bel', 'bel\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bel\n* tgt\\_alpha3: spa\n* short\\_pair: be-es\n* chrF2\\_score: 0.272\n* bleu: 11.8\n* brevity\\_penalty: 0.892\n* ref\\_len: 1412.0\n* src\\_name: Belarusian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-16\n* src\\_alpha2: be\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: bel-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bel-spa\n\n\n* source group: Belarusian\n* target group: Spanish\n* OPUS readme: bel-spa\n* model: transformer-align\n* source language(s): bel bel\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.272",
"### System Info:\n\n\n* hf\\_name: bel-spa\n* source\\_languages: bel\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'es']\n* src\\_constituents: {'bel', 'bel\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bel\n* tgt\\_alpha3: spa\n* short\\_pair: be-es\n* chrF2\\_score: 0.272\n* bleu: 11.8\n* brevity\\_penalty: 0.892\n* ref\\_len: 1412.0\n* src\\_name: Belarusian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-16\n* src\\_alpha2: be\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: bel-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
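The "System Info" block in each of these records is a flat list of `key: value` bullets. A small parser — a hypothetical helper, not part of the OPUS-MT or dataset tooling — can recover those fields as a dict, assuming only the bullet formatting seen in the cards above:

```python
import re

def parse_system_info(card_text: str) -> dict:
    """Parse '- key: value' bullets from the 'System Info' section of an
    OPUS-MT model card into a plain dict (numeric values become floats)."""
    info = {}
    for m in re.finditer(r"^[-*]\s*(\w+):\s*(.+)$", card_text, re.M):
        key, value = m.group(1), m.group(2).strip()
        try:
            info[key] = float(value)  # bleu, chrF2_score, ref_len, ...
        except ValueError:
            info[key] = value         # names, dates, git hashes, ...
    return info

# A few lines copied from the bel-spa record above:
card = """\
- hf_name: bel-spa
- chrF2_score: 0.272
- bleu: 11.8
- src_name: Belarusian
"""
print(parse_system_info(card)["bleu"])  # -> 11.8
```

The same helper works on any record in this dump, since they all share the bullet layout.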
translation | transformers |
### opus-mt-bem-en
* source languages: bem
* target languages: en
* OPUS readme: [bem-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.en | 33.4 | 0.491 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bem-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bem",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bem #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bem-en
* source languages: bem
* target languages: en
* OPUS readme: bem-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.4, chr-F: 0.491
| [
"### opus-mt-bem-en\n\n\n* source languages: bem\n* target languages: en\n* OPUS readme: bem-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.4, chr-F: 0.491"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bem #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bem-en\n\n\n* source languages: bem\n* target languages: en\n* OPUS readme: bem-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.4, chr-F: 0.491"
] |
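Every checkpoint documented here follows the `Helsinki-NLP/opus-mt-{src}-{tgt}` naming convention, so a card's language pair is enough to load the model with the MarianMT classes in `transformers`. A minimal sketch — the `translate` helper is illustrative only; it requires `transformers` and `sentencepiece` to be installed and downloads the weights on first use, so it is defined here but not called:

```python
def opus_mt_id(src: str, tgt: str) -> str:
    """Hub id for an OPUS-MT checkpoint (Helsinki-NLP/opus-mt-{src}-{tgt})."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate(texts, src="bem", tgt="en"):
    """Translate a batch of sentences with a MarianMT checkpoint.
    Network-dependent: fetches the model from the Hugging Face Hub."""
    from transformers import MarianMTModel, MarianTokenizer
    name = opus_mt_id(src, tgt)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    return tokenizer.batch_decode(model.generate(**batch),
                                  skip_special_tokens=True)

print(opus_mt_id("bem", "en"))  # -> Helsinki-NLP/opus-mt-bem-en
```

Because the source text is normalized and segmented with SentencePiece inside the tokenizer, no manual pre-processing is needed at inference time.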
translation | transformers |
### opus-mt-bem-es
* source languages: bem
* target languages: es
* OPUS readme: [bem-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.es | 22.8 | 0.403 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bem-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bem",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bem #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bem-es
* source languages: bem
* target languages: es
* OPUS readme: bem-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.8, chr-F: 0.403
| [
"### opus-mt-bem-es\n\n\n* source languages: bem\n* target languages: es\n* OPUS readme: bem-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.403"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bem #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bem-es\n\n\n* source languages: bem\n* target languages: es\n* OPUS readme: bem-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.403"
] |
translation | transformers |
### opus-mt-bem-fi
* source languages: bem
* target languages: fi
* OPUS readme: [bem-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fi | 22.8 | 0.439 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bem-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bem",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bem #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bem-fi
* source languages: bem
* target languages: fi
* OPUS readme: bem-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.8, chr-F: 0.439
| [
"### opus-mt-bem-fi\n\n\n* source languages: bem\n* target languages: fi\n* OPUS readme: bem-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.439"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bem #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bem-fi\n\n\n* source languages: bem\n* target languages: fi\n* OPUS readme: bem-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.439"
] |
translation | transformers |
### opus-mt-bem-fr
* source languages: bem
* target languages: fr
* OPUS readme: [bem-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fr | 25.0 | 0.417 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bem-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bem",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bem #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bem-fr
* source languages: bem
* target languages: fr
* OPUS readme: bem-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.0, chr-F: 0.417
| [
"### opus-mt-bem-fr\n\n\n* source languages: bem\n* target languages: fr\n* OPUS readme: bem-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.417"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bem #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bem-fr\n\n\n* source languages: bem\n* target languages: fr\n* OPUS readme: bem-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.417"
] |
translation | transformers |
### opus-mt-bem-sv
* source languages: bem
* target languages: sv
* OPUS readme: [bem-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.sv | 25.6 | 0.434 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bem-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bem",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bem #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bem-sv
* source languages: bem
* target languages: sv
* OPUS readme: bem-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.6, chr-F: 0.434
| [
"### opus-mt-bem-sv\n\n\n* source languages: bem\n* target languages: sv\n* OPUS readme: bem-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.434"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bem #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bem-sv\n\n\n* source languages: bem\n* target languages: sv\n* OPUS readme: bem-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.434"
] |
translation | transformers |
### opus-mt-ber-en
* source languages: ber
* target languages: en
* OPUS readme: [ber-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.en | 37.3 | 0.566 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ber-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ber",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ber #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ber-en
* source languages: ber
* target languages: en
* OPUS readme: ber-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.3, chr-F: 0.566
| [
"### opus-mt-ber-en\n\n\n* source languages: ber\n* target languages: en\n* OPUS readme: ber-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.566"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ber #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ber-en\n\n\n* source languages: ber\n* target languages: en\n* OPUS readme: ber-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.566"
] |
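Each benchmark row reports BLEU alongside chr-F, a character n-gram F-score. The sketch below is a simplified, self-contained version of the metric for intuition only — the published scores in these cards come from the OPUS-MT evaluation pipeline, not from this code:

```python
from collections import Counter

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: character n-gram precision/recall averaged over
    n = 1..max_n, combined into an F-beta score (beta=2 weights recall)."""
    def ngrams(text: str, n: int) -> Counter:
        text = text.replace(" ", "")  # chrF operates on characters, not words
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        if sum(hyp.values()) and sum(ref.values()):
            precisions.append(overlap / sum(hyp.values()))
            recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

print(chrf("hello there", "hello there"))  # -> 1.0
```

A score of 0.566 (the Tatoeba ber-en row above) therefore means roughly that over half of the character n-grams are shared between hypotheses and references, averaged across n-gram orders.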
translation | transformers |
### opus-mt-ber-es
* source languages: ber
* target languages: es
* OPUS readme: [ber-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.es | 33.8 | 0.487 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ber-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ber",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ber #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ber-es
* source languages: ber
* target languages: es
* OPUS readme: ber-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.8, chr-F: 0.487
| [
"### opus-mt-ber-es\n\n\n* source languages: ber\n* target languages: es\n* OPUS readme: ber-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.487"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ber #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ber-es\n\n\n* source languages: ber\n* target languages: es\n* OPUS readme: ber-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.487"
] |
translation | transformers |
### opus-mt-ber-fr
* source languages: ber
* target languages: fr
* OPUS readme: [ber-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.fr | 60.2 | 0.754 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ber-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ber",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ber #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ber-fr
* source languages: ber
* target languages: fr
* OPUS readme: ber-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 60.2, chr-F: 0.754
| [
"### opus-mt-ber-fr\n\n\n* source languages: ber\n* target languages: fr\n* OPUS readme: ber-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.2, chr-F: 0.754"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ber #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ber-fr\n\n\n* source languages: ber\n* target languages: fr\n* OPUS readme: ber-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.2, chr-F: 0.754"
] |
translation | transformers |
### bul-deu
* source group: Bulgarian
* target group: German
* OPUS readme: [bul-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md)
* model: transformer
* source language(s): bul
* target language(s): deu
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.deu | 49.3 | 0.676 |
### System Info:
- hf_name: bul-deu
- source_languages: bul
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'de']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: deu
- short_pair: bg-de
- chrF2_score: 0.6759999999999999
- bleu: 49.3
- brevity_penalty: 1.0
- ref_len: 2218.0
- src_name: Bulgarian
- tgt_name: German
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: de
- prefer_old: False
- long_pair: bul-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"de"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-deu
* source group: Bulgarian
* target group: German
* OPUS readme: bul-deu
* model: transformer
* source language(s): bul
* target language(s): deu
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.3, chr-F: 0.676
### System Info:
* hf\_name: bul-deu
* source\_languages: bul
* target\_languages: deu
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'de']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'deu'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: deu
* short\_pair: bg-de
* chrF2\_score: 0.6759999999999999
* bleu: 49.3
* brevity\_penalty: 1.0
* ref\_len: 2218.0
* src\_name: Bulgarian
* tgt\_name: German
* train\_date: 2020-07-03
* src\_alpha2: bg
* tgt\_alpha2: de
* prefer\_old: False
* long\_pair: bul-deu
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bul-deu\n\n\n* source group: Bulgarian\n* target group: German\n* OPUS readme: bul-deu\n* model: transformer\n* source language(s): bul\n* target language(s): deu\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.3, chr-F: 0.676",
"### System Info:\n\n\n* hf\\_name: bul-deu\n* source\\_languages: bul\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'de']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: deu\n* short\\_pair: bg-de\n* chrF2\\_score: 0.6759999999999999\n* bleu: 49.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 2218.0\n* src\\_name: Bulgarian\n* tgt\\_name: German\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: bul-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-deu\n\n\n* source group: Bulgarian\n* target group: German\n* OPUS readme: bul-deu\n* model: transformer\n* source language(s): bul\n* target language(s): deu\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.3, chr-F: 0.676",
"### System Info:\n\n\n* hf\\_name: bul-deu\n* source\\_languages: bul\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'de']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: deu\n* short\\_pair: bg-de\n* chrF2\\_score: 0.6759999999999999\n* bleu: 49.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 2218.0\n* src\\_name: Bulgarian\n* tgt\\_name: German\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: bul-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
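The bul-deu record above reports `brevity_penalty: 1.0` against a 2218-token reference, while the earlier bel-spa record's 0.892 implies hypotheses roughly 11% shorter than its 1412-token reference. This is the standard BLEU brevity penalty, which only discounts hypotheses shorter than the reference:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as long
    as the reference, else exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# Not shorter than the reference -> no penalty, as in the bul-deu card.
print(brevity_penalty(2300, 2218))  # -> 1.0
```

Working backwards, a penalty of 0.892 against a 1412-token reference corresponds to a total hypothesis length of about 1267 tokens.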
translation | transformers |
### opus-mt-bg-en
* source languages: bg
* target languages: en
* OPUS readme: [bg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.bg.en | 59.4 | 0.727 |
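A minimal usage sketch for a card like this one — assumptions: the `transformers` and `sentencepiece` packages are installed, and the checkpoint follows the `Helsinki-NLP/opus-mt-{src}-{tgt}` Hub naming shown above. This is an illustration, not part of the original card:

```python
def opus_mt_model_id(src: str, tgt: str) -> str:
    """Build the Hub repo id for an OPUS-MT pair, e.g. ('bg', 'en') -> 'Helsinki-NLP/opus-mt-bg-en'."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"


def translate(texts, src: str = "bg", tgt: str = "en"):
    """Translate a batch of sentences with a MarianMT checkpoint.

    The import is deferred so the sketch can be defined without
    `transformers` installed; calling it downloads the model on first use.
    """
    from transformers import MarianMTModel, MarianTokenizer

    name = opus_mt_model_id(src, tgt)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]
```

Calling `translate(["Здравей, свят!"])` would fetch `Helsinki-NLP/opus-mt-bg-en` and return the English translation.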
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bg-en
* source languages: bg
* target languages: en
* OPUS readme: bg-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 59.4, chr-F: 0.727
| [
"### opus-mt-bg-en\n\n\n* source languages: bg\n* target languages: en\n* OPUS readme: bg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 59.4, chr-F: 0.727"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bg-en\n\n\n* source languages: bg\n* target languages: en\n* OPUS readme: bg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 59.4, chr-F: 0.727"
] |
translation | transformers |
### bul-epo
* source group: Bulgarian
* target group: Esperanto
* OPUS readme: [bul-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md)
* model: transformer-align
* source language(s): bul
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.epo | 24.5 | 0.438 |
### System Info:
- hf_name: bul-epo
- source_languages: bul
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'eo']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt
- src_alpha3: bul
- tgt_alpha3: epo
- short_pair: bg-eo
- chrF2_score: 0.43799999999999994
- bleu: 24.5
- brevity_penalty: 0.9670000000000001
- ref_len: 4043.0
- src_name: Bulgarian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: bg
- tgt_alpha2: eo
- prefer_old: False
- long_pair: bul-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-epo
* source group: Bulgarian
* target group: Esperanto
* OPUS readme: bul-epo
* model: transformer-align
* source language(s): bul
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.5, chr-F: 0.438
### System Info:
* hf\_name: bul-epo
* source\_languages: bul
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'eo']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: epo
* short\_pair: bg-eo
* chrF2\_score: 0.43799999999999994
* bleu: 24.5
* brevity\_penalty: 0.9670000000000001
* ref\_len: 4043.0
* src\_name: Bulgarian
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: bg
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: bul-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bul-epo\n\n\n* source group: Bulgarian\n* target group: Esperanto\n* OPUS readme: bul-epo\n* model: transformer-align\n* source language(s): bul\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.438",
"### System Info:\n\n\n* hf\\_name: bul-epo\n* source\\_languages: bul\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'eo']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: epo\n* short\\_pair: bg-eo\n* chrF2\\_score: 0.43799999999999994\n* bleu: 24.5\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 4043.0\n* src\\_name: Bulgarian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: bg\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: bul-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-epo\n\n\n* source group: Bulgarian\n* target group: Esperanto\n* OPUS readme: bul-epo\n* model: transformer-align\n* source language(s): bul\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.438",
"### System Info:\n\n\n* hf\\_name: bul-epo\n* source\\_languages: bul\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'eo']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: epo\n* short\\_pair: bg-eo\n* chrF2\\_score: 0.43799999999999994\n* bleu: 24.5\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 4043.0\n* src\\_name: Bulgarian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: bg\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: bul-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### bul-spa
* source group: Bulgarian
* target group: Spanish
* OPUS readme: [bul-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md)
* model: transformer
* source language(s): bul
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.spa | 49.1 | 0.661 |
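The System Info block that follows reports a `brevity_penalty` alongside BLEU. As a hedged sketch (the standard BLEU definition, not taken from this card's own scoring script), that factor penalizes hypotheses shorter than the reference:

```python
import math


def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least as
    long as the reference, exp(1 - ref_len/hyp_len) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

For this pair the reported penalty of 0.992 indicates the system output was only marginally shorter than the 1783-token reference.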
### System Info:
- hf_name: bul-spa
- source_languages: bul
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'es']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: spa
- short_pair: bg-es
- chrF2_score: 0.6609999999999999
- bleu: 49.1
- brevity_penalty: 0.992
- ref_len: 1783.0
- src_name: Bulgarian
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: es
- prefer_old: False
- long_pair: bul-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-spa
* source group: Bulgarian
* target group: Spanish
* OPUS readme: bul-spa
* model: transformer
* source language(s): bul
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.1, chr-F: 0.661
### System Info:
* hf\_name: bul-spa
* source\_languages: bul
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'es']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: spa
* short\_pair: bg-es
* chrF2\_score: 0.6609999999999999
* bleu: 49.1
* brevity\_penalty: 0.992
* ref\_len: 1783.0
* src\_name: Bulgarian
* tgt\_name: Spanish
* train\_date: 2020-07-03
* src\_alpha2: bg
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: bul-spa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bul-spa\n\n\n* source group: Bulgarian\n* target group: Spanish\n* OPUS readme: bul-spa\n* model: transformer\n* source language(s): bul\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.1, chr-F: 0.661",
"### System Info:\n\n\n* hf\\_name: bul-spa\n* source\\_languages: bul\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'es']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: spa\n* short\\_pair: bg-es\n* chrF2\\_score: 0.6609999999999999\n* bleu: 49.1\n* brevity\\_penalty: 0.992\n* ref\\_len: 1783.0\n* src\\_name: Bulgarian\n* tgt\\_name: Spanish\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: bul-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-spa\n\n\n* source group: Bulgarian\n* target group: Spanish\n* OPUS readme: bul-spa\n* model: transformer\n* source language(s): bul\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.1, chr-F: 0.661",
"### System Info:\n\n\n* hf\\_name: bul-spa\n* source\\_languages: bul\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'es']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: spa\n* short\\_pair: bg-es\n* chrF2\\_score: 0.6609999999999999\n* bleu: 49.1\n* brevity\\_penalty: 0.992\n* ref\\_len: 1783.0\n* src\\_name: Bulgarian\n* tgt\\_name: Spanish\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: bul-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-bg-fi
* source languages: bg
* target languages: fi
* OPUS readme: [bg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.fi | 23.7 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bg-fi
* source languages: bg
* target languages: fi
* OPUS readme: bg-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.7, chr-F: 0.505
| [
"### opus-mt-bg-fi\n\n\n* source languages: bg\n* target languages: fi\n* OPUS readme: bg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.505"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bg-fi\n\n\n* source languages: bg\n* target languages: fi\n* OPUS readme: bg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.505"
] |
translation | transformers |
### bul-fra
* source group: Bulgarian
* target group: French
* OPUS readme: [bul-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md)
* model: transformer
* source language(s): bul
* target language(s): fra
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.fra | 53.7 | 0.693 |
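Besides loading the Marian classes directly, the higher-level `pipeline` API can wrap a checkpoint like this one. A sketch, assuming `transformers` and `sentencepiece` are installed (this helper is illustrative, not from the card):

```python
def build_translator(model_name: str = "Helsinki-NLP/opus-mt-bg-fr"):
    """Return a translation pipeline for the checkpoint above.

    The import is deferred so the function can be defined without
    `transformers` installed; calling it downloads the model on first use.
    """
    from transformers import pipeline

    return pipeline("translation", model=model_name)
```

Once built, `build_translator()(["Добро утро."])` would return a list of dicts with a `translation_text` key.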
### System Info:
- hf_name: bul-fra
- source_languages: bul
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'fr']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: fra
- short_pair: bg-fr
- chrF2_score: 0.693
- bleu: 53.7
- brevity_penalty: 0.977
- ref_len: 3669.0
- src_name: Bulgarian
- tgt_name: French
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: fr
- prefer_old: False
- long_pair: bul-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"fr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-fra
* source group: Bulgarian
* target group: French
* OPUS readme: bul-fra
* model: transformer
* source language(s): bul
* target language(s): fra
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 53.7, chr-F: 0.693
### System Info:
* hf\_name: bul-fra
* source\_languages: bul
* target\_languages: fra
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'fr']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'fra'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: fra
* short\_pair: bg-fr
* chrF2\_score: 0.693
* bleu: 53.7
* brevity\_penalty: 0.977
* ref\_len: 3669.0
* src\_name: Bulgarian
* tgt\_name: French
* train\_date: 2020-07-03
* src\_alpha2: bg
* tgt\_alpha2: fr
* prefer\_old: False
* long\_pair: bul-fra
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bul-fra\n\n\n* source group: Bulgarian\n* target group: French\n* OPUS readme: bul-fra\n* model: transformer\n* source language(s): bul\n* target language(s): fra\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.7, chr-F: 0.693",
"### System Info:\n\n\n* hf\\_name: bul-fra\n* source\\_languages: bul\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'fr']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: fra\n* short\\_pair: bg-fr\n* chrF2\\_score: 0.693\n* bleu: 53.7\n* brevity\\_penalty: 0.977\n* ref\\_len: 3669.0\n* src\\_name: Bulgarian\n* tgt\\_name: French\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: bul-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-fra\n\n\n* source group: Bulgarian\n* target group: French\n* OPUS readme: bul-fra\n* model: transformer\n* source language(s): bul\n* target language(s): fra\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.7, chr-F: 0.693",
"### System Info:\n\n\n* hf\\_name: bul-fra\n* source\\_languages: bul\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'fr']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: fra\n* short\\_pair: bg-fr\n* chrF2\\_score: 0.693\n* bleu: 53.7\n* brevity\\_penalty: 0.977\n* ref\\_len: 3669.0\n* src\\_name: Bulgarian\n* tgt\\_name: French\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: bul-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### bul-ita
* source group: Bulgarian
* target group: Italian
* OPUS readme: [bul-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md)
* model: transformer
* source language(s): bul
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.ita | 43.1 | 0.653 |
### System Info:
- hf_name: bul-ita
- source_languages: bul
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'it']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: ita
- short_pair: bg-it
- chrF2_score: 0.653
- bleu: 43.1
- brevity_penalty: 0.987
- ref_len: 16951.0
- src_name: Bulgarian
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: it
- prefer_old: False
- long_pair: bul-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-ita
* source group: Bulgarian
* target group: Italian
* OPUS readme: bul-ita
* model: transformer
* source language(s): bul
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 43.1, chr-F: 0.653
### System Info:
* hf\_name: bul-ita
* source\_languages: bul
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'it']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'ita'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: ita
* short\_pair: bg-it
* chrF2\_score: 0.653
* bleu: 43.1
* brevity\_penalty: 0.987
* ref\_len: 16951.0
* src\_name: Bulgarian
* tgt\_name: Italian
* train\_date: 2020-07-03
* src\_alpha2: bg
* tgt\_alpha2: it
* prefer\_old: False
* long\_pair: bul-ita
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bul-ita\n\n\n* source group: Bulgarian\n* target group: Italian\n* OPUS readme: bul-ita\n* model: transformer\n* source language(s): bul\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.1, chr-F: 0.653",
"### System Info:\n\n\n* hf\\_name: bul-ita\n* source\\_languages: bul\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'it']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: ita\n* short\\_pair: bg-it\n* chrF2\\_score: 0.653\n* bleu: 43.1\n* brevity\\_penalty: 0.987\n* ref\\_len: 16951.0\n* src\\_name: Bulgarian\n* tgt\\_name: Italian\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: bul-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-ita\n\n\n* source group: Bulgarian\n* target group: Italian\n* OPUS readme: bul-ita\n* model: transformer\n* source language(s): bul\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.1, chr-F: 0.653",
"### System Info:\n\n\n* hf\\_name: bul-ita\n* source\\_languages: bul\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'it']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: ita\n* short\\_pair: bg-it\n* chrF2\\_score: 0.653\n* bleu: 43.1\n* brevity\\_penalty: 0.987\n* ref\\_len: 16951.0\n* src\\_name: Bulgarian\n* tgt\\_name: Italian\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: bul-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### bul-rus
* source group: Bulgarian
* target group: Russian
* OPUS readme: [bul-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md)
* model: transformer
* source language(s): bul bul_Latn
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.rus | 48.5 | 0.691 |
### System Info:
- hf_name: bul-rus
- source_languages: bul
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'ru']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: rus
- short_pair: bg-ru
- chrF2_score: 0.691
- bleu: 48.5
- brevity_penalty: 1.0
- ref_len: 7870.0
- src_name: Bulgarian
- tgt_name: Russian
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: ru
- prefer_old: False
- long_pair: bul-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-rus
* source group: Bulgarian
* target group: Russian
* OPUS readme: bul-rus
* model: transformer
* source language(s): bul bul\_Latn
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 48.5, chr-F: 0.691
### System Info:
* hf\_name: bul-rus
* source\_languages: bul
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'ru']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'rus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: rus
* short\_pair: bg-ru
* chrF2\_score: 0.691
* bleu: 48.5
* brevity\_penalty: 1.0
* ref\_len: 7870.0
* src\_name: Bulgarian
* tgt\_name: Russian
* train\_date: 2020-07-03
* src\_alpha2: bg
* tgt\_alpha2: ru
* prefer\_old: False
* long\_pair: bul-rus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bul-rus\n\n\n* source group: Bulgarian\n* target group: Russian\n* OPUS readme: bul-rus\n* model: transformer\n* source language(s): bul bul\\_Latn\n* target language(s): rus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.5, chr-F: 0.691",
"### System Info:\n\n\n* hf\\_name: bul-rus\n* source\\_languages: bul\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'ru']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: rus\n* short\\_pair: bg-ru\n* chrF2\\_score: 0.691\n* bleu: 48.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 7870.0\n* src\\_name: Bulgarian\n* tgt\\_name: Russian\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: bul-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-rus\n\n\n* source group: Bulgarian\n* target group: Russian\n* OPUS readme: bul-rus\n* model: transformer\n* source language(s): bul bul\\_Latn\n* target language(s): rus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.5, chr-F: 0.691",
"### System Info:\n\n\n* hf\\_name: bul-rus\n* source\\_languages: bul\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'ru']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: rus\n* short\\_pair: bg-ru\n* chrF2\\_score: 0.691\n* bleu: 48.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 7870.0\n* src\\_name: Bulgarian\n* tgt\\_name: Russian\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: bul-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-bg-sv
* source languages: bg
* target languages: sv
* OPUS readme: [bg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.sv | 29.1 | 0.494 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bg-sv
* source languages: bg
* target languages: sv
* OPUS readme: bg-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.1, chr-F: 0.494
| [
"### opus-mt-bg-sv\n\n\n* source languages: bg\n* target languages: sv\n* OPUS readme: bg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.494"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bg-sv\n\n\n* source languages: bg\n* target languages: sv\n* OPUS readme: bg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.494"
] |
translation | transformers |
### bul-tur
* source group: Bulgarian
* target group: Turkish
* OPUS readme: [bul-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md)
* model: transformer
* source language(s): bul bul_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.tur | 40.9 | 0.687 |
### System Info:
- hf_name: bul-tur
- source_languages: bul
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'tr']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: tur
- short_pair: bg-tr
- chrF2_score: 0.687
- bleu: 40.9
- brevity_penalty: 0.946
- ref_len: 4948.0
- src_name: Bulgarian
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: tr
- prefer_old: False
- long_pair: bul-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "tr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-tr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"tr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-tur
* source group: Bulgarian
* target group: Turkish
* OPUS readme: bul-tur
* model: transformer
* source language(s): bul bul\_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.9, chr-F: 0.687
### System Info:
* hf\_name: bul-tur
* source\_languages: bul
* target\_languages: tur
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'tr']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'tur'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: tur
* short\_pair: bg-tr
* chrF2\_score: 0.687
* bleu: 40.9
* brevity\_penalty: 0.946
* ref\_len: 4948.0
* src\_name: Bulgarian
* tgt\_name: Turkish
* train\_date: 2020-07-03
* src\_alpha2: bg
* tgt\_alpha2: tr
* prefer\_old: False
* long\_pair: bul-tur
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
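Each card reports a BLEU `brevity_penalty` next to `ref_len`. As a hedged aside (this is the standard BLEU definition from Papineni et al., not something stated in the cards themselves), the penalty is 1 when the hypothesis is at least as long as the reference, and exp(1 − ref\_len/hyp\_len) otherwise. A minimal sketch; the hypothesis length 4688 below is back-solved for illustration, since the cards do not include it:

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """Standard BLEU brevity penalty: penalize hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# bg-tr card above: ref_len 4948.0, reported brevity_penalty 0.946;
# a hypothesis length near 4688 tokens reproduces that value.
print(round(brevity_penalty(4948.0, 4688.0), 3))  # → 0.946
```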
| [
"### bul-tur\n\n\n* source group: Bulgarian\n* target group: Turkish\n* OPUS readme: bul-tur\n* model: transformer\n* source language(s): bul bul\\_Latn\n* target language(s): tur\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.9, chr-F: 0.687",
"### System Info:\n\n\n* hf\\_name: bul-tur\n* source\\_languages: bul\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'tr']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: tur\n* short\\_pair: bg-tr\n* chrF2\\_score: 0.687\n* bleu: 40.9\n* brevity\\_penalty: 0.946\n* ref\\_len: 4948.0\n* src\\_name: Bulgarian\n* tgt\\_name: Turkish\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: bul-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-tur\n\n\n* source group: Bulgarian\n* target group: Turkish\n* OPUS readme: bul-tur\n* model: transformer\n* source language(s): bul bul\\_Latn\n* target language(s): tur\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.9, chr-F: 0.687",
"### System Info:\n\n\n* hf\\_name: bul-tur\n* source\\_languages: bul\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'tr']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: tur\n* short\\_pair: bg-tr\n* chrF2\\_score: 0.687\n* bleu: 40.9\n* brevity\\_penalty: 0.946\n* ref\\_len: 4948.0\n* src\\_name: Bulgarian\n* tgt\\_name: Turkish\n* train\\_date: 2020-07-03\n* src\\_alpha2: bg\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: bul-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### bul-ukr
* source group: Bulgarian
* target group: Ukrainian
* OPUS readme: [bul-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md)
* model: transformer-align
* source language(s): bul
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.ukr | 49.2 | 0.683 |
### System Info:
- hf_name: bul-ukr
- source_languages: bul
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'uk']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt
- src_alpha3: bul
- tgt_alpha3: ukr
- short_pair: bg-uk
- chrF2_score: 0.683
- bleu: 49.2
- brevity_penalty: 0.983
- ref_len: 4932.0
- src_name: Bulgarian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: bg
- tgt_alpha2: uk
- prefer_old: False
- long_pair: bul-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bg", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bg-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bg",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bg",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bg #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bul-ukr
* source group: Bulgarian
* target group: Ukrainian
* OPUS readme: bul-ukr
* model: transformer-align
* source language(s): bul
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.2, chr-F: 0.683
### System Info:
* hf\_name: bul-ukr
* source\_languages: bul
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bg', 'uk']
* src\_constituents: {'bul', 'bul\_Latn'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bul
* tgt\_alpha3: ukr
* short\_pair: bg-uk
* chrF2\_score: 0.683
* bleu: 49.2
* brevity\_penalty: 0.983
* ref\_len: 4932.0
* src\_name: Bulgarian
* tgt\_name: Ukrainian
* train\_date: 2020-06-17
* src\_alpha2: bg
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: bul-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
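The `chr-F` / `chrF2_score` column is the character n-gram F-score of Popović (2015) with β = 2, which weights recall twice as heavily as precision. A hedged sketch of just the final precision/recall combination (the full metric also averages character n-gram matches up to order 6, which is omitted here):

```python
def chrf_score(precision: float, recall: float, beta: float = 2.0) -> float:
    """Combine character n-gram precision/recall into chrF (beta=2 favours recall)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# With equal precision and recall the F-score reduces to that common value.
print(chrf_score(0.5, 0.5))  # → 0.5
```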
| [
"### bul-ukr\n\n\n* source group: Bulgarian\n* target group: Ukrainian\n* OPUS readme: bul-ukr\n* model: transformer-align\n* source language(s): bul\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.683",
"### System Info:\n\n\n* hf\\_name: bul-ukr\n* source\\_languages: bul\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'uk']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: ukr\n* short\\_pair: bg-uk\n* chrF2\\_score: 0.6829999999999999\n* bleu: 49.2\n* brevity\\_penalty: 0.983\n* ref\\_len: 4932.0\n* src\\_name: Bulgarian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: bg\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: bul-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bg #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bul-ukr\n\n\n* source group: Bulgarian\n* target group: Ukrainian\n* OPUS readme: bul-ukr\n* model: transformer-align\n* source language(s): bul\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.683",
"### System Info:\n\n\n* hf\\_name: bul-ukr\n* source\\_languages: bul\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bg', 'uk']\n* src\\_constituents: {'bul', 'bul\\_Latn'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bul\n* tgt\\_alpha3: ukr\n* short\\_pair: bg-uk\n* chrF2\\_score: 0.6829999999999999\n* bleu: 49.2\n* brevity\\_penalty: 0.983\n* ref\\_len: 4932.0\n* src\\_name: Bulgarian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: bg\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: bul-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-bi-en
* source languages: bi
* target languages: en
* OPUS readme: [bi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.en | 30.3 | 0.458 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bi-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bi-en
* source languages: bi
* target languages: en
* OPUS readme: bi-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.3, chr-F: 0.458
| [
"### opus-mt-bi-en\n\n\n* source languages: bi\n* target languages: en\n* OPUS readme: bi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.458"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bi-en\n\n\n* source languages: bi\n* target languages: en\n* OPUS readme: bi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.458"
] |
translation | transformers |
### opus-mt-bi-es
* source languages: bi
* target languages: es
* OPUS readme: [bi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.es | 21.1 | 0.388 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bi-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bi",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bi-es
* source languages: bi
* target languages: es
* OPUS readme: bi-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.1, chr-F: 0.388
| [
"### opus-mt-bi-es\n\n\n* source languages: bi\n* target languages: es\n* OPUS readme: bi-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.388"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bi-es\n\n\n* source languages: bi\n* target languages: es\n* OPUS readme: bi-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.388"
] |
translation | transformers |
### opus-mt-bi-fr
* source languages: bi
* target languages: fr
* OPUS readme: [bi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.fr | 21.5 | 0.382 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bi-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bi",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bi-fr
* source languages: bi
* target languages: fr
* OPUS readme: bi-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.5, chr-F: 0.382
| [
"### opus-mt-bi-fr\n\n\n* source languages: bi\n* target languages: fr\n* OPUS readme: bi-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.382"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bi-fr\n\n\n* source languages: bi\n* target languages: fr\n* OPUS readme: bi-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.382"
] |
translation | transformers |
### opus-mt-bi-sv
* source languages: bi
* target languages: sv
* OPUS readme: [bi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.sv | 22.7 | 0.403 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bi-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bi",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bi-sv
* source languages: bi
* target languages: sv
* OPUS readme: bi-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.7, chr-F: 0.403
| [
"### opus-mt-bi-sv\n\n\n* source languages: bi\n* target languages: sv\n* OPUS readme: bi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.403"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bi-sv\n\n\n* source languages: bi\n* target languages: sv\n* OPUS readme: bi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.403"
] |
translation | transformers |
### ben-eng
* source group: Bengali
* target group: English
* OPUS readme: [ben-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md)
* model: transformer-align
* source language(s): ben
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ben.eng | 49.7 | 0.641 |
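As a hedged aside (not part of the original card), OPUS-MT checkpoints such as this bn-en model follow a predictable Hub id pattern and are served through the MarianMT classes in `transformers`. The helper below only constructs the id; the actual loading and translation is sketched in comments because it requires the library and a model download.

```python
def opus_mt_model_id(src: str, tgt: str) -> str:
    # Hub id pattern used by the Helsinki-NLP OPUS-MT cards above,
    # e.g. opus_mt_model_id("bn", "en") -> "Helsinki-NLP/opus-mt-bn-en".
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

# Loading the model itself needs the `transformers` package and a network
# connection, so it is only sketched here:
#
#   from transformers import MarianMTModel, MarianTokenizer
#   name = opus_mt_model_id("bn", "en")
#   tokenizer = MarianTokenizer.from_pretrained(name)
#   model = MarianMTModel.from_pretrained(name)
#   batch = tokenizer(["আমি বই পড়ি।"], return_tensors="pt")
#   print(tokenizer.batch_decode(model.generate(**batch),
#                                skip_special_tokens=True))
```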
### System Info:
- hf_name: ben-eng
- source_languages: ben
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'en']
- src_constituents: {'ben'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt
- src_alpha3: ben
- tgt_alpha3: eng
- short_pair: bn-en
- chrF2_score: 0.6409999999999999
- bleu: 49.7
- brevity_penalty: 0.976
- ref_len: 13978.0
- src_name: Bengali
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: bn
- tgt_alpha2: en
- prefer_old: False
- long_pair: ben-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bn", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bn-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bn",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bn",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ben-eng
* source group: Bengali
* target group: English
* OPUS readme: ben-eng
* model: transformer-align
* source language(s): ben
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.7, chr-F: 0.641
### System Info:
* hf\_name: ben-eng
* source\_languages: ben
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bn', 'en']
* src\_constituents: {'ben'}
* tgt\_constituents: {'eng'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ben
* tgt\_alpha3: eng
* short\_pair: bn-en
* chrF2\_score: 0.6409999999999999
* bleu: 49.7
* brevity\_penalty: 0.976
* ref\_len: 13978.0
* src\_name: Bengali
* tgt\_name: English
* train\_date: 2020-06-17
* src\_alpha2: bn
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: ben-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ben-eng\n\n\n* source group: Bengali\n* target group: English\n* OPUS readme: ben-eng\n* model: transformer-align\n* source language(s): ben\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.7, chr-F: 0.641",
"### System Info:\n\n\n* hf\\_name: ben-eng\n* source\\_languages: ben\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'en']\n* src\\_constituents: {'ben'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ben\n* tgt\\_alpha3: eng\n* short\\_pair: bn-en\n* chrF2\\_score: 0.6409999999999999\n* bleu: 49.7\n* brevity\\_penalty: 0.976\n* ref\\_len: 13978.0\n* src\\_name: Bengali\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: bn\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ben-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ben-eng\n\n\n* source group: Bengali\n* target group: English\n* OPUS readme: ben-eng\n* model: transformer-align\n* source language(s): ben\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.7, chr-F: 0.641",
"### System Info:\n\n\n* hf\\_name: ben-eng\n* source\\_languages: ben\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'en']\n* src\\_constituents: {'ben'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ben\n* tgt\\_alpha3: eng\n* short\\_pair: bn-en\n* chrF2\\_score: 0.6409999999999999\n* bleu: 49.7\n* brevity\\_penalty: 0.976\n* ref\\_len: 13978.0\n* src\\_name: Bengali\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: bn\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ben-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### bnt-eng
* source group: Bantu languages
* target group: English
* OPUS readme: [bnt-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md)
* model: transformer
* source language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kin-eng.kin.eng | 31.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 8.3 | 0.271 |
| Tatoeba-test.lug-eng.lug.eng | 5.3 | 0.128 |
| Tatoeba-test.multi.eng | 23.1 | 0.394 |
| Tatoeba-test.nya-eng.nya.eng | 38.3 | 0.527 |
| Tatoeba-test.run-eng.run.eng | 26.6 | 0.431 |
| Tatoeba-test.sna-eng.sna.eng | 27.5 | 0.440 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.195 |
| Tatoeba-test.toi-eng.toi.eng | 16.2 | 0.342 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 8.4 | 0.231 |
| Tatoeba-test.xho-eng.xho.eng | 37.2 | 0.554 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.576 |
### System Info:
- hf_name: bnt-eng
- source_languages: bnt
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en']
- src_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt
- src_alpha3: bnt
- tgt_alpha3: eng
- short_pair: bnt-en
- chrF2_score: 0.39399999999999996
- bleu: 23.1
- brevity_penalty: 1.0
- ref_len: 14565.0
- src_name: Bantu languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: bnt
- tgt_alpha2: en
- prefer_old: False
- long_pair: bnt-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bnt-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"zu",
"rw",
"lg",
"ts",
"ln",
"ny",
"xh",
"rn",
"bnt",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sn",
"zu",
"rw",
"lg",
"ts",
"ln",
"ny",
"xh",
"rn",
"bnt",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sn #zu #rw #lg #ts #ln #ny #xh #rn #bnt #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### bnt-eng
* source group: Bantu languages
* target group: English
* OPUS readme: bnt-eng
* model: transformer
* source language(s): kin lin lug nya run sna swh toi\_Latn tso umb xho zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.7, chr-F: 0.481
testset: URL, BLEU: 8.3, chr-F: 0.271
testset: URL, BLEU: 5.3, chr-F: 0.128
testset: URL, BLEU: 23.1, chr-F: 0.394
testset: URL, BLEU: 38.3, chr-F: 0.527
testset: URL, BLEU: 26.6, chr-F: 0.431
testset: URL, BLEU: 27.5, chr-F: 0.440
testset: URL, BLEU: 4.6, chr-F: 0.195
testset: URL, BLEU: 16.2, chr-F: 0.342
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 8.4, chr-F: 0.231
testset: URL, BLEU: 37.2, chr-F: 0.554
testset: URL, BLEU: 40.9, chr-F: 0.576
### System Info:
* hf\_name: bnt-eng
* source\_languages: bnt
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en']
* src\_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi\_Latn', 'umb'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: bnt
* tgt\_alpha3: eng
* short\_pair: bnt-en
* chrF2\_score: 0.39399999999999996
* bleu: 23.1
* brevity\_penalty: 1.0
* ref\_len: 14565.0
* src\_name: Bantu languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: bnt
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: bnt-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### bnt-eng\n\n\n* source group: Bantu languages\n* target group: English\n* OPUS readme: bnt-eng\n* model: transformer\n* source language(s): kin lin lug nya run sna swh toi\\_Latn tso umb xho zul\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.481\ntestset: URL, BLEU: 8.3, chr-F: 0.271\ntestset: URL, BLEU: 5.3, chr-F: 0.128\ntestset: URL, BLEU: 23.1, chr-F: 0.394\ntestset: URL, BLEU: 38.3, chr-F: 0.527\ntestset: URL, BLEU: 26.6, chr-F: 0.431\ntestset: URL, BLEU: 27.5, chr-F: 0.440\ntestset: URL, BLEU: 4.6, chr-F: 0.195\ntestset: URL, BLEU: 16.2, chr-F: 0.342\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 8.4, chr-F: 0.231\ntestset: URL, BLEU: 37.2, chr-F: 0.554\ntestset: URL, BLEU: 40.9, chr-F: 0.576",
"### System Info:\n\n\n* hf\\_name: bnt-eng\n* source\\_languages: bnt\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en']\n* src\\_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi\\_Latn', 'umb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bnt\n* tgt\\_alpha3: eng\n* short\\_pair: bnt-en\n* chrF2\\_score: 0.39399999999999996\n* bleu: 23.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 14565.0\n* src\\_name: Bantu languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: bnt\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: bnt-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #zu #rw #lg #ts #ln #ny #xh #rn #bnt #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### bnt-eng\n\n\n* source group: Bantu languages\n* target group: English\n* OPUS readme: bnt-eng\n* model: transformer\n* source language(s): kin lin lug nya run sna swh toi\\_Latn tso umb xho zul\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.481\ntestset: URL, BLEU: 8.3, chr-F: 0.271\ntestset: URL, BLEU: 5.3, chr-F: 0.128\ntestset: URL, BLEU: 23.1, chr-F: 0.394\ntestset: URL, BLEU: 38.3, chr-F: 0.527\ntestset: URL, BLEU: 26.6, chr-F: 0.431\ntestset: URL, BLEU: 27.5, chr-F: 0.440\ntestset: URL, BLEU: 4.6, chr-F: 0.195\ntestset: URL, BLEU: 16.2, chr-F: 0.342\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 8.4, chr-F: 0.231\ntestset: URL, BLEU: 37.2, chr-F: 0.554\ntestset: URL, BLEU: 40.9, chr-F: 0.576",
"### System Info:\n\n\n* hf\\_name: bnt-eng\n* source\\_languages: bnt\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en']\n* src\\_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi\\_Latn', 'umb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: bnt\n* tgt\\_alpha3: eng\n* short\\_pair: bnt-en\n* chrF2\\_score: 0.39399999999999996\n* bleu: 23.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 14565.0\n* src\\_name: Bantu languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: bnt\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: bnt-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-bzs-en
* source languages: bzs
* target languages: en
* OPUS readme: [bzs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.en | 44.5 | 0.605 |
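The chr-F column in these benchmark tables is a character n-gram F-score. As an illustrative sketch only (simplified; the cards' actual scores come from a sacrebleu-style chrF implementation), the metric's core idea can be written as:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams with whitespace removed (a common chrF convention).
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def simple_chrf(hypothesis: str, reference: str,
                max_n: int = 6, beta: float = 2.0) -> float:
    # Toy chrF: average character n-gram precision and recall over
    # n = 1..max_n, combined as an F-beta score (beta=2 favors recall).
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p = sum(precisions) / max_n
    r = sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

Identical strings score 1.0 and fully disjoint strings score 0.0; published chrF additionally handles edge cases and averaging details not reproduced here.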
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bzs-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bzs",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bzs-en
* source languages: bzs
* target languages: en
* OPUS readme: bzs-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.5, chr-F: 0.605
| [
"### opus-mt-bzs-en\n\n\n* source languages: bzs\n* target languages: en\n* OPUS readme: bzs-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.5, chr-F: 0.605"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bzs-en\n\n\n* source languages: bzs\n* target languages: en\n* OPUS readme: bzs-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.5, chr-F: 0.605"
] |
translation | transformers |
### opus-mt-bzs-es
* source languages: bzs
* target languages: es
* OPUS readme: [bzs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.es | 28.1 | 0.464 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bzs-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bzs",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bzs-es
* source languages: bzs
* target languages: es
* OPUS readme: bzs-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.1, chr-F: 0.464
| [
"### opus-mt-bzs-es\n\n\n* source languages: bzs\n* target languages: es\n* OPUS readme: bzs-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.464"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bzs-es\n\n\n* source languages: bzs\n* target languages: es\n* OPUS readme: bzs-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.464"
] |
translation | transformers |
### opus-mt-bzs-fi
* source languages: bzs
* target languages: fi
* OPUS readme: [bzs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.fi | 24.7 | 0.464 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bzs-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bzs",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bzs-fi
* source languages: bzs
* target languages: fi
* OPUS readme: bzs-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.7, chr-F: 0.464
| [
"### opus-mt-bzs-fi\n\n\n* source languages: bzs\n* target languages: fi\n* OPUS readme: bzs-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.464"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bzs-fi\n\n\n* source languages: bzs\n* target languages: fi\n* OPUS readme: bzs-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.464"
] |
translation | transformers |
### opus-mt-bzs-fr
* source languages: bzs
* target languages: fr
* OPUS readme: [bzs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.fr | 30.0 | 0.479 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bzs-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bzs",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bzs-fr
* source languages: bzs
* target languages: fr
* OPUS readme: bzs-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.0, chr-F: 0.479
| [
"### opus-mt-bzs-fr\n\n\n* source languages: bzs\n* target languages: fr\n* OPUS readme: bzs-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.479"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bzs-fr\n\n\n* source languages: bzs\n* target languages: fr\n* OPUS readme: bzs-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.479"
] |
translation | transformers |
### opus-mt-bzs-sv
* source languages: bzs
* target languages: sv
* OPUS readme: [bzs-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.sv | 30.7 | 0.489 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-bzs-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bzs",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-bzs-sv
* source languages: bzs
* target languages: sv
* OPUS readme: bzs-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.7, chr-F: 0.489
| [
"### opus-mt-bzs-sv\n\n\n* source languages: bzs\n* target languages: sv\n* OPUS readme: bzs-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.489"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bzs #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-bzs-sv\n\n\n* source languages: bzs\n* target languages: sv\n* OPUS readme: bzs-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.489"
] |
translation | transformers |
### cat-deu
* source group: Catalan
* target group: German
* OPUS readme: [cat-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.deu | 39.5 | 0.593 |
### System Info:
- hf_name: cat-deu
- source_languages: cat
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'de']
- src_constituents: {'cat'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: deu
- short_pair: ca-de
- chrF2_score: 0.593
- bleu: 39.5
- brevity_penalty: 1.0
- ref_len: 5643.0
- src_name: Catalan
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: de
- prefer_old: False
- long_pair: cat-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"de"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### cat-deu
* source group: Catalan
* target group: German
* OPUS readme: cat-deu
* model: transformer-align
* source language(s): cat
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.5, chr-F: 0.593
### System Info:
* hf\_name: cat-deu
* source\_languages: cat
* target\_languages: deu
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'de']
* src\_constituents: {'cat'}
* tgt\_constituents: {'deu'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: cat
* tgt\_alpha3: deu
* short\_pair: ca-de
* chrF2\_score: 0.593
* bleu: 39.5
* brevity\_penalty: 1.0
* ref\_len: 5643.0
* src\_name: Catalan
* tgt\_name: German
* train\_date: 2020-06-16
* src\_alpha2: ca
* tgt\_alpha2: de
* prefer\_old: False
* long\_pair: cat-deu
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### cat-deu\n\n\n* source group: Catalan\n* target group: German\n* OPUS readme: cat-deu\n* model: transformer-align\n* source language(s): cat\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.5, chr-F: 0.593",
"### System Info:\n\n\n* hf\\_name: cat-deu\n* source\\_languages: cat\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'de']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: deu\n* short\\_pair: ca-de\n* chrF2\\_score: 0.593\n* bleu: 39.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 5643.0\n* src\\_name: Catalan\n* tgt\\_name: German\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: cat-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### cat-deu\n\n\n* source group: Catalan\n* target group: German\n* OPUS readme: cat-deu\n* model: transformer-align\n* source language(s): cat\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.5, chr-F: 0.593",
"### System Info:\n\n\n* hf\\_name: cat-deu\n* source\\_languages: cat\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'de']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: deu\n* short\\_pair: ca-de\n* chrF2\\_score: 0.593\n* bleu: 39.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 5643.0\n* src\\_name: Catalan\n* tgt\\_name: German\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: cat-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ca-en
* source languages: ca
* target languages: en
* OPUS readme: [ca-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ca.en | 51.4 | 0.678 |
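The benchmark tables in these cards are plain markdown and easy to consume programmatically. A small helper (hypothetical, not part of transformers or any OPUS tooling) that turns such a table into a dict:

```python
def parse_benchmarks(table_md):
    """Parse a 'testset | BLEU | chr-F' markdown table into
    {testset: (bleu, chrf)}, skipping the header and separator rows."""
    rows = {}
    for line in table_md.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 3 or cells[0] == "testset" or set(cells[0]) <= {"-"}:
            continue
        name, bleu, chrf = cells
        rows[name] = (float(bleu), float(chrf))
    return rows

table = """
| testset       | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.ca.en | 51.4 | 0.678 |
"""
```

`parse_benchmarks(table)` yields `{"Tatoeba.ca.en": (51.4, 0.678)}`, matching the row above.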
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ca-en
* source languages: ca
* target languages: en
* OPUS readme: ca-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.4, chr-F: 0.678
| [
"### opus-mt-ca-en\n\n\n* source languages: ca\n* target languages: en\n* OPUS readme: ca-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.4, chr-F: 0.678"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ca-en\n\n\n* source languages: ca\n* target languages: en\n* OPUS readme: ca-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.4, chr-F: 0.678"
] |
translation | transformers |
### opus-mt-ca-es
* source languages: ca
* target languages: es
* OPUS readme: [ca-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ca.es | 74.9 | 0.863 |
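For context, the Tatoeba test-set BLEU scores reported across the Catalan OPUS-MT cards in this collection; the closely related ca-es pair scores far above the rest:

```python
# Tatoeba test-set BLEU per language pair, as reported in these model cards.
bleu = {"ca-de": 39.5, "ca-en": 51.4, "ca-es": 74.9, "ca-fr": 52.4,
        "ca-it": 48.6, "ca-nl": 45.1, "ca-pt": 44.9}

best = max(bleu, key=bleu.get)                    # "ca-es"
spread = max(bleu.values()) - min(bleu.values())  # gap between best and worst pair
```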
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ca-es
* source languages: ca
* target languages: es
* OPUS readme: ca-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 74.9, chr-F: 0.863
| [
"### opus-mt-ca-es\n\n\n* source languages: ca\n* target languages: es\n* OPUS readme: ca-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 74.9, chr-F: 0.863"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ca-es\n\n\n* source languages: ca\n* target languages: es\n* OPUS readme: ca-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 74.9, chr-F: 0.863"
] |
translation | transformers |
### cat-fra
* source group: Catalan
* target group: French
* OPUS readme: [cat-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-fra/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.fra | 52.4 | 0.694 |
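The System Info block below reports a brevity_penalty of 0.987 — BLEU's multiplicative penalty for system output shorter than the reference. Its definition is simple enough to sketch:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1.0 if the system output is at least as long as
    the reference, otherwise exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# With ref_len = 5517 (from this card), a penalty of 0.987 implies the system
# output was only slightly shorter than the reference overall.
```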
### System Info:
- hf_name: cat-fra
- source_languages: cat
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'fr']
- src_constituents: {'cat'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: fra
- short_pair: ca-fr
- chrF2_score: 0.6940000000000001
- bleu: 52.4
- brevity_penalty: 0.987
- ref_len: 5517.0
- src_name: Catalan
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: fr
- prefer_old: False
- long_pair: cat-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"fr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### cat-fra
* source group: Catalan
* target group: French
* OPUS readme: cat-fra
* model: transformer-align
* source language(s): cat
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 52.4, chr-F: 0.694
### System Info:
* hf\_name: cat-fra
* source\_languages: cat
* target\_languages: fra
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'fr']
* src\_constituents: {'cat'}
* tgt\_constituents: {'fra'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: cat
* tgt\_alpha3: fra
* short\_pair: ca-fr
* chrF2\_score: 0.6940000000000001
* bleu: 52.4
* brevity\_penalty: 0.987
* ref\_len: 5517.0
* src\_name: Catalan
* tgt\_name: French
* train\_date: 2020-06-16
* src\_alpha2: ca
* tgt\_alpha2: fr
* prefer\_old: False
* long\_pair: cat-fra
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### cat-fra\n\n\n* source group: Catalan\n* target group: French\n* OPUS readme: cat-fra\n* model: transformer-align\n* source language(s): cat\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.4, chr-F: 0.694",
"### System Info:\n\n\n* hf\\_name: cat-fra\n* source\\_languages: cat\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'fr']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: fra\n* short\\_pair: ca-fr\n* chrF2\\_score: 0.6940000000000001\n* bleu: 52.4\n* brevity\\_penalty: 0.987\n* ref\\_len: 5517.0\n* src\\_name: Catalan\n* tgt\\_name: French\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: cat-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### cat-fra\n\n\n* source group: Catalan\n* target group: French\n* OPUS readme: cat-fra\n* model: transformer-align\n* source language(s): cat\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.4, chr-F: 0.694",
"### System Info:\n\n\n* hf\\_name: cat-fra\n* source\\_languages: cat\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'fr']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: fra\n* short\\_pair: ca-fr\n* chrF2\\_score: 0.6940000000000001\n* bleu: 52.4\n* brevity\\_penalty: 0.987\n* ref\\_len: 5517.0\n* src\\_name: Catalan\n* tgt\\_name: French\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: cat-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### cat-ita
* source group: Catalan
* target group: Italian
* OPUS readme: [cat-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ita/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.ita | 48.6 | 0.690 |
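Each of these cards lists "normalization + SentencePiece" as pre-processing. The exact normalization comes from the OPUS-MT/Moses scripts; a rough stand-in covering only the common parts (Unicode NFC composition and whitespace collapsing) — an approximation, not the actual pipeline:

```python
import re
import unicodedata

def normalize(text):
    """Approximate input normalization: compose accented characters to NFC
    and collapse runs of whitespace to single spaces."""
    text = unicodedata.normalize("NFC", text)
    return re.sub(r"\s+", " ", text).strip()
```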
### System Info:
- hf_name: cat-ita
- source_languages: cat
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'it']
- src_constituents: {'cat'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: ita
- short_pair: ca-it
- chrF2_score: 0.69
- bleu: 48.6
- brevity_penalty: 0.985
- ref_len: 1995.0
- src_name: Catalan
- tgt_name: Italian
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: it
- prefer_old: False
- long_pair: cat-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### cat-ita
* source group: Catalan
* target group: Italian
* OPUS readme: cat-ita
* model: transformer-align
* source language(s): cat
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 48.6, chr-F: 0.690
### System Info:
* hf\_name: cat-ita
* source\_languages: cat
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'it']
* src\_constituents: {'cat'}
* tgt\_constituents: {'ita'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: cat
* tgt\_alpha3: ita
* short\_pair: ca-it
* chrF2\_score: 0.69
* bleu: 48.6
* brevity\_penalty: 0.985
* ref\_len: 1995.0
* src\_name: Catalan
* tgt\_name: Italian
* train\_date: 2020-06-16
* src\_alpha2: ca
* tgt\_alpha2: it
* prefer\_old: False
* long\_pair: cat-ita
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### cat-ita\n\n\n* source group: Catalan\n* target group: Italian\n* OPUS readme: cat-ita\n* model: transformer-align\n* source language(s): cat\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.6, chr-F: 0.690",
"### System Info:\n\n\n* hf\\_name: cat-ita\n* source\\_languages: cat\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'it']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: ita\n* short\\_pair: ca-it\n* chrF2\\_score: 0.69\n* bleu: 48.6\n* brevity\\_penalty: 0.985\n* ref\\_len: 1995.0\n* src\\_name: Catalan\n* tgt\\_name: Italian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: cat-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### cat-ita\n\n\n* source group: Catalan\n* target group: Italian\n* OPUS readme: cat-ita\n* model: transformer-align\n* source language(s): cat\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.6, chr-F: 0.690",
"### System Info:\n\n\n* hf\\_name: cat-ita\n* source\\_languages: cat\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'it']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: ita\n* short\\_pair: ca-it\n* chrF2\\_score: 0.69\n* bleu: 48.6\n* brevity\\_penalty: 0.985\n* ref\\_len: 1995.0\n* src\\_name: Catalan\n* tgt\\_name: Italian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: cat-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### cat-nld
* source group: Catalan
* target group: Dutch
* OPUS readme: [cat-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.nld | 45.1 | 0.632 |
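Each System Info block pairs a long_pair in ISO 639-3 codes (here cat-nld) with a short_pair in ISO 639-1 (ca-nl). The conversion is a table lookup — a sketch covering only the codes that appear in these Catalan cards:

```python
# ISO 639-3 -> ISO 639-1 codes for the languages in these model cards
# (small excerpt; a full mapping would come from the ISO 639 tables).
ISO3_TO_ISO1 = {"cat": "ca", "deu": "de", "eng": "en", "spa": "es",
                "fra": "fr", "ita": "it", "nld": "nl", "por": "pt"}

def short_pair(long_pair):
    """Derive a card's short_pair (e.g. 'ca-nl') from its long_pair ('cat-nld')."""
    src, tgt = long_pair.split("-")
    return f"{ISO3_TO_ISO1[src]}-{ISO3_TO_ISO1[tgt]}"
```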
### System Info:
- hf_name: cat-nld
- source_languages: cat
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'nl']
- src_constituents: {'cat'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: nld
- short_pair: ca-nl
- chrF2_score: 0.632
- bleu: 45.1
- brevity_penalty: 0.965
- ref_len: 4157.0
- src_name: Catalan
- tgt_name: Dutch
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: nl
- prefer_old: False
- long_pair: cat-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "nl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-nl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"nl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### cat-nld
* source group: Catalan
* target group: Dutch
* OPUS readme: cat-nld
* model: transformer-align
* source language(s): cat
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.1, chr-F: 0.632
### System Info:
* hf\_name: cat-nld
* source\_languages: cat
* target\_languages: nld
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'nl']
* src\_constituents: {'cat'}
* tgt\_constituents: {'nld'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: cat
* tgt\_alpha3: nld
* short\_pair: ca-nl
* chrF2\_score: 0.632
* bleu: 45.1
* brevity\_penalty: 0.965
* ref\_len: 4157.0
* src\_name: Catalan
* tgt\_name: Dutch
* train\_date: 2020-06-16
* src\_alpha2: ca
* tgt\_alpha2: nl
* prefer\_old: False
* long\_pair: cat-nld
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### cat-nld\n\n\n* source group: Catalan\n* target group: Dutch\n* OPUS readme: cat-nld\n* model: transformer-align\n* source language(s): cat\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.1, chr-F: 0.632",
"### System Info:\n\n\n* hf\\_name: cat-nld\n* source\\_languages: cat\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'nl']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: nld\n* short\\_pair: ca-nl\n* chrF2\\_score: 0.632\n* bleu: 45.1\n* brevity\\_penalty: 0.965\n* ref\\_len: 4157.0\n* src\\_name: Catalan\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: cat-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### cat-nld\n\n\n* source group: Catalan\n* target group: Dutch\n* OPUS readme: cat-nld\n* model: transformer-align\n* source language(s): cat\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.1, chr-F: 0.632",
"### System Info:\n\n\n* hf\\_name: cat-nld\n* source\\_languages: cat\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'nl']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: nld\n* short\\_pair: ca-nl\n* chrF2\\_score: 0.632\n* bleu: 45.1\n* brevity\\_penalty: 0.965\n* ref\\_len: 4157.0\n* src\\_name: Catalan\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: cat-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### cat-por
* source group: Catalan
* target group: Portuguese
* OPUS readme: [cat-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.por | 44.9 | 0.658 |
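The "(spm12k,spm12k)" noted above refers to SentencePiece models with 12k-piece subword vocabularies on the source and target sides. Real SentencePiece segmentation uses a learned unigram or BPE model; purely as an illustration of subword splitting, a toy greedy longest-match segmenter:

```python
def greedy_segment(word, vocab):
    """Toy longest-match subword segmentation. Illustration only: real
    SentencePiece fits a unigram/BPE model, while this just greedily matches
    the longest vocabulary piece, falling back to single characters."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces
```

With `vocab = {"trans", "former"}`, `greedy_segment("transformer", vocab)` splits the word into `["trans", "former"]`.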
### System Info:
- hf_name: cat-por
- source_languages: cat
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'pt']
- src_constituents: {'cat'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt
- src_alpha3: cat
- tgt_alpha3: por
- short_pair: ca-pt
- chrF2_score: 0.6579999999999999
- bleu: 44.9
- brevity_penalty: 0.953
- ref_len: 5847.0
- src_name: Catalan
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: ca
- tgt_alpha2: pt
- prefer_old: False
- long_pair: cat-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "pt"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-pt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"pt"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### cat-por
* source group: Catalan
* target group: Portuguese
* OPUS readme: cat-por
* model: transformer-align
* source language(s): cat
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.9, chr-F: 0.658
### System Info:
* hf\_name: cat-por
* source\_languages: cat
* target\_languages: por
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'pt']
* src\_constituents: {'cat'}
* tgt\_constituents: {'por'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: cat
* tgt\_alpha3: por
* short\_pair: ca-pt
* chrF2\_score: 0.6579999999999999
* bleu: 44.9
* brevity\_penalty: 0.953
* ref\_len: 5847.0
* src\_name: Catalan
* tgt\_name: Portuguese
* train\_date: 2020-06-17
* src\_alpha2: ca
* tgt\_alpha2: pt
* prefer\_old: False
* long\_pair: cat-por
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### cat-por\n\n\n* source group: Catalan\n* target group: Portuguese\n* OPUS readme: cat-por\n* model: transformer-align\n* source language(s): cat\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.9, chr-F: 0.658",
"### System Info:\n\n\n* hf\\_name: cat-por\n* source\\_languages: cat\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'pt']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: por\n* short\\_pair: ca-pt\n* chrF2\\_score: 0.6579999999999999\n* bleu: 44.9\n* brevity\\_penalty: 0.953\n* ref\\_len: 5847.0\n* src\\_name: Catalan\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ca\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: cat-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### cat-por\n\n\n* source group: Catalan\n* target group: Portuguese\n* OPUS readme: cat-por\n* model: transformer-align\n* source language(s): cat\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.9, chr-F: 0.658",
"### System Info:\n\n\n* hf\\_name: cat-por\n* source\\_languages: cat\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'pt']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: por\n* short\\_pair: ca-pt\n* chrF2\\_score: 0.6579999999999999\n* bleu: 44.9\n* brevity\\_penalty: 0.953\n* ref\\_len: 5847.0\n* src\\_name: Catalan\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ca\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: cat-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
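Each record above reports a `brevity_penalty` and `ref_len` alongside its BLEU score. These follow the standard BLEU definition, BP = exp(1 − ref_len/hyp_len) when the hypothesis is shorter than the reference, and 1 otherwise. As a rough sketch, the penalty can be inverted to estimate the hypothesis length behind a reported score (the recovered length is an estimate, not a field stored in the card):

```python
import math

def hyp_len_from_bp(brevity_penalty: float, ref_len: float) -> float:
    """Invert BP = exp(1 - ref_len / hyp_len) to recover the
    approximate hypothesis length behind a reported BLEU score."""
    if brevity_penalty >= 1.0:
        # No penalty applied: hypothesis was at least as long as the reference.
        return ref_len
    return ref_len / (1.0 - math.log(brevity_penalty))

# cat-por record above: brevity_penalty 0.953, ref_len 5847.0
print(round(hyp_len_from_bp(0.953, 5847.0)))  # ~5578 hypothesis tokens
```

This only recovers an aggregate length, but it is a quick sanity check on whether a low BLEU score is driven by short outputs rather than poor n-gram overlap.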
translation | transformers |
### cat-ukr
* source group: Catalan
* target group: Ukrainian
* OPUS readme: [cat-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ukr/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.ukr | 28.6 | 0.503 |
### System Info:
- hf_name: cat-ukr
- source_languages: cat
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'uk']
- src_constituents: {'cat'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: ukr
- short_pair: ca-uk
- chrF2_score: 0.503
- bleu: 28.6
- brevity_penalty: 0.9670000000000001
- ref_len: 2438.0
- src_name: Catalan
- tgt_name: Ukrainian
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: uk
- prefer_old: False
- long_pair: cat-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ca-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### cat-ukr
* source group: Catalan
* target group: Ukrainian
* OPUS readme: cat-ukr
* model: transformer-align
* source language(s): cat
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.6, chr-F: 0.503
### System Info:
* hf\_name: cat-ukr
* source\_languages: cat
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'uk']
* src\_constituents: {'cat'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: cat
* tgt\_alpha3: ukr
* short\_pair: ca-uk
* chrF2\_score: 0.503
* bleu: 28.6
* brevity\_penalty: 0.9670000000000001
* ref\_len: 2438.0
* src\_name: Catalan
* tgt\_name: Ukrainian
* train\_date: 2020-06-16
* src\_alpha2: ca
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: cat-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### cat-ukr\n\n\n* source group: Catalan\n* target group: Ukrainian\n* OPUS readme: cat-ukr\n* model: transformer-align\n* source language(s): cat\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.503",
"### System Info:\n\n\n* hf\\_name: cat-ukr\n* source\\_languages: cat\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'uk']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: ukr\n* short\\_pair: ca-uk\n* chrF2\\_score: 0.503\n* bleu: 28.6\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 2438.0\n* src\\_name: Catalan\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: cat-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### cat-ukr\n\n\n* source group: Catalan\n* target group: Ukrainian\n* OPUS readme: cat-ukr\n* model: transformer-align\n* source language(s): cat\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.503",
"### System Info:\n\n\n* hf\\_name: cat-ukr\n* source\\_languages: cat\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'uk']\n* src\\_constituents: {'cat'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cat\n* tgt\\_alpha3: ukr\n* short\\_pair: ca-uk\n* chrF2\\_score: 0.503\n* bleu: 28.6\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 2438.0\n* src\\_name: Catalan\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ca\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: cat-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### cau-eng
* source group: Caucasian languages
* target group: English
* OPUS readme: [cau-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md)
* model: transformer
* source language(s): abk ady che kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.abk-eng.abk.eng | 0.3 | 0.134 |
| Tatoeba-test.ady-eng.ady.eng | 0.4 | 0.104 |
| Tatoeba-test.che-eng.che.eng | 0.6 | 0.128 |
| Tatoeba-test.kat-eng.kat.eng | 18.6 | 0.366 |
| Tatoeba-test.multi.eng | 16.6 | 0.351 |
### System Info:
- hf_name: cau-eng
- source_languages: cau
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ab', 'ka', 'ce', 'cau', 'en']
- src_constituents: {'abk', 'kat', 'che', 'ady'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cau
- tgt_alpha3: eng
- short_pair: cau-en
- chrF2_score: 0.35100000000000003
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 6285.0
- src_name: Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cau
- tgt_alpha2: en
- prefer_old: False
- long_pair: cau-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ab", "ka", "ce", "cau", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-cau-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ab",
"ka",
"ce",
"cau",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ab",
"ka",
"ce",
"cau",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ab #ka #ce #cau #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### cau-eng
* source group: Caucasian languages
* target group: English
* OPUS readme: cau-eng
* model: transformer
* source language(s): abk ady che kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 0.3, chr-F: 0.134
testset: URL, BLEU: 0.4, chr-F: 0.104
testset: URL, BLEU: 0.6, chr-F: 0.128
testset: URL, BLEU: 18.6, chr-F: 0.366
testset: URL, BLEU: 16.6, chr-F: 0.351
### System Info:
* hf\_name: cau-eng
* source\_languages: cau
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ab', 'ka', 'ce', 'cau', 'en']
* src\_constituents: {'abk', 'kat', 'che', 'ady'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: cau
* tgt\_alpha3: eng
* short\_pair: cau-en
* chrF2\_score: 0.35100000000000003
* bleu: 16.6
* brevity\_penalty: 1.0
* ref\_len: 6285.0
* src\_name: Caucasian languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: cau
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: cau-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### cau-eng\n\n\n* source group: Caucasian languages\n* target group: English\n* OPUS readme: cau-eng\n* model: transformer\n* source language(s): abk ady che kat\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.3, chr-F: 0.134\ntestset: URL, BLEU: 0.4, chr-F: 0.104\ntestset: URL, BLEU: 0.6, chr-F: 0.128\ntestset: URL, BLEU: 18.6, chr-F: 0.366\ntestset: URL, BLEU: 16.6, chr-F: 0.351",
"### System Info:\n\n\n* hf\\_name: cau-eng\n* source\\_languages: cau\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ab', 'ka', 'ce', 'cau', 'en']\n* src\\_constituents: {'abk', 'kat', 'che', 'ady'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cau\n* tgt\\_alpha3: eng\n* short\\_pair: cau-en\n* chrF2\\_score: 0.35100000000000003\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 6285.0\n* src\\_name: Caucasian languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: cau\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: cau-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ab #ka #ce #cau #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### cau-eng\n\n\n* source group: Caucasian languages\n* target group: English\n* OPUS readme: cau-eng\n* model: transformer\n* source language(s): abk ady che kat\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.3, chr-F: 0.134\ntestset: URL, BLEU: 0.4, chr-F: 0.104\ntestset: URL, BLEU: 0.6, chr-F: 0.128\ntestset: URL, BLEU: 18.6, chr-F: 0.366\ntestset: URL, BLEU: 16.6, chr-F: 0.351",
"### System Info:\n\n\n* hf\\_name: cau-eng\n* source\\_languages: cau\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ab', 'ka', 'ce', 'cau', 'en']\n* src\\_constituents: {'abk', 'kat', 'che', 'ady'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: cau\n* tgt\\_alpha3: eng\n* short\\_pair: cau-en\n* chrF2\\_score: 0.35100000000000003\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 6285.0\n* src\\_name: Caucasian languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: cau\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: cau-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ccs-eng
* source group: South Caucasian languages
* target group: English
* OPUS readme: [ccs-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md)
* model: transformer
* source language(s): kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kat-eng.kat.eng | 18.0 | 0.357 |
| Tatoeba-test.multi.eng | 18.0 | 0.357 |
### System Info:
- hf_name: ccs-eng
- source_languages: ccs
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ka', 'ccs', 'en']
- src_constituents: {'kat'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt
- src_alpha3: ccs
- tgt_alpha3: eng
- short_pair: ccs-en
- chrF2_score: 0.35700000000000004
- bleu: 18.0
- brevity_penalty: 1.0
- ref_len: 5992.0
- src_name: South Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: ccs
- tgt_alpha2: en
- prefer_old: False
- long_pair: ccs-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ka", "ccs", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ccs-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ka",
"ccs",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ka",
"ccs",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ka #ccs #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ccs-eng
* source group: South Caucasian languages
* target group: English
* OPUS readme: ccs-eng
* model: transformer
* source language(s): kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.0, chr-F: 0.357
testset: URL, BLEU: 18.0, chr-F: 0.357
### System Info:
* hf\_name: ccs-eng
* source\_languages: ccs
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ka', 'ccs', 'en']
* src\_constituents: {'kat'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ccs
* tgt\_alpha3: eng
* short\_pair: ccs-en
* chrF2\_score: 0.35700000000000004
* bleu: 18.0
* brevity\_penalty: 1.0
* ref\_len: 5992.0
* src\_name: South Caucasian languages
* tgt\_name: English
* train\_date: 2020-07-31
* src\_alpha2: ccs
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: ccs-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ccs-eng\n\n\n* source group: South Caucasian languages\n* target group: English\n* OPUS readme: ccs-eng\n* model: transformer\n* source language(s): kat\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.0, chr-F: 0.357\ntestset: URL, BLEU: 18.0, chr-F: 0.357",
"### System Info:\n\n\n* hf\\_name: ccs-eng\n* source\\_languages: ccs\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'ccs', 'en']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ccs\n* tgt\\_alpha3: eng\n* short\\_pair: ccs-en\n* chrF2\\_score: 0.35700000000000004\n* bleu: 18.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 5992.0\n* src\\_name: South Caucasian languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: ccs\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ccs-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ka #ccs #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ccs-eng\n\n\n* source group: South Caucasian languages\n* target group: English\n* OPUS readme: ccs-eng\n* model: transformer\n* source language(s): kat\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.0, chr-F: 0.357\ntestset: URL, BLEU: 18.0, chr-F: 0.357",
"### System Info:\n\n\n* hf\\_name: ccs-eng\n* source\\_languages: ccs\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'ccs', 'en']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ccs\n* tgt\\_alpha3: eng\n* short\\_pair: ccs-en\n* chrF2\\_score: 0.35700000000000004\n* bleu: 18.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 5992.0\n* src\\_name: South Caucasian languages\n* tgt\\_name: English\n* train\\_date: 2020-07-31\n* src\\_alpha2: ccs\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ccs-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ceb-eng
* source group: Cebuano
* target group: English
* OPUS readme: [ceb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md)
* model: transformer-align
* source language(s): ceb
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ceb.eng | 21.5 | 0.387 |
### System Info:
- hf_name: ceb-eng
- source_languages: ceb
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ceb', 'en']
- src_constituents: {'ceb'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt
- src_alpha3: ceb
- tgt_alpha3: eng
- short_pair: ceb-en
- chrF2_score: 0.387
- bleu: 21.5
- brevity_penalty: 1.0
- ref_len: 2293.0
- src_name: Cebuano
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ceb
- tgt_alpha2: en
- prefer_old: False
- long_pair: ceb-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ceb", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ceb-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ceb",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ceb",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ceb #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ceb-eng
* source group: Cebuano
* target group: English
* OPUS readme: ceb-eng
* model: transformer-align
* source language(s): ceb
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.5, chr-F: 0.387
### System Info:
* hf\_name: ceb-eng
* source\_languages: ceb
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ceb', 'en']
* src\_constituents: {'ceb'}
* tgt\_constituents: {'eng'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ceb
* tgt\_alpha3: eng
* short\_pair: ceb-en
* chrF2\_score: 0.387
* bleu: 21.5
* brevity\_penalty: 1.0
* ref\_len: 2293.0
* src\_name: Cebuano
* tgt\_name: English
* train\_date: 2020-06-17
* src\_alpha2: ceb
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: ceb-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ceb-eng\n\n\n* source group: Cebuano\n* target group: English\n* OPUS readme: ceb-eng\n* model: transformer-align\n* source language(s): ceb\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.387",
"### System Info:\n\n\n* hf\\_name: ceb-eng\n* source\\_languages: ceb\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ceb', 'en']\n* src\\_constituents: {'ceb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ceb\n* tgt\\_alpha3: eng\n* short\\_pair: ceb-en\n* chrF2\\_score: 0.387\n* bleu: 21.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 2293.0\n* src\\_name: Cebuano\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ceb\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ceb-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ceb #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ceb-eng\n\n\n* source group: Cebuano\n* target group: English\n* OPUS readme: ceb-eng\n* model: transformer-align\n* source language(s): ceb\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.387",
"### System Info:\n\n\n* hf\\_name: ceb-eng\n* source\\_languages: ceb\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ceb', 'en']\n* src\\_constituents: {'ceb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ceb\n* tgt\\_alpha3: eng\n* short\\_pair: ceb-en\n* chrF2\\_score: 0.387\n* bleu: 21.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 2293.0\n* src\\_name: Cebuano\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ceb\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ceb-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
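In this dump the markdown benchmark tables are flattened into lines of the form `testset: URL, BLEU: 21.5, chr-F: 0.387`. A minimal sketch for recovering the scores from such a record (the field names in the returned dicts are my own choice, not part of the dataset schema):

```python
import re

# Matches the flattened benchmark lines produced by this dump.
BENCH_RE = re.compile(
    r"testset: (?P<name>[^,]+), BLEU: (?P<bleu>[\d.]+), chr-F: (?P<chrf>[\d.]+)"
)

def parse_benchmarks(card_text: str) -> list[dict]:
    """Pull (testset, BLEU, chr-F) rows out of a flattened model-card dump."""
    return [
        {"testset": m["name"], "bleu": float(m["bleu"]), "chrf": float(m["chrf"])}
        for m in BENCH_RE.finditer(card_text)
    ]

sample = "testset: URL, BLEU: 21.5, chr-F: 0.387"
print(parse_benchmarks(sample))
```

Note that the dump has already replaced the testset names with the placeholder `URL`, so per-testset identity must be taken from the original markdown tables (e.g. `Tatoeba-test.ceb.eng`) where those survive in the `text` field.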