Dataset schema (one row per Hugging Face model repository):

| column | dtype | range / classes |
|---------------|------------------------|-----------------|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-27 06:27:46 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 499 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-27 06:26:25 |
| card | string | length 11 – 1.01M |
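Rows with this schema can be filtered and ranked with plain Python once loaded. A minimal sketch, using hypothetical in-memory dicts (a small subset of the fields and records below, not the real dataset):

```python
# Hypothetical records shaped like the schema above.
records = [
    {"modelId": "Helsinki-NLP/opus-mt-de-en", "downloads": 673311, "likes": 44},
    {"modelId": "Helsinki-NLP/opus-mt-da-de", "downloads": 16695, "likes": 0},
    {"modelId": "Helsinki-NLP/opus-mt-de-eu", "downloads": 166, "likes": 1},
]

# Keep models above a download threshold, most-downloaded first.
popular = sorted(
    (r for r in records if r["downloads"] > 1000),
    key=lambda r: r["downloads"],
    reverse=True,
)
print([r["modelId"] for r in popular])
# → ['Helsinki-NLP/opus-mt-de-en', 'Helsinki-NLP/opus-mt-da-de']
```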
## Helsinki-NLP/opus-mt-de-eu

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:50Z
- downloads: 166
- likes: 1
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "eu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
language:
- de
- eu
tags:
- translation
license: apache-2.0
---

### deu-eus

* source group: German
* target group: Basque
* OPUS readme: [deu-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-eus/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): eus
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.deu.eus | 31.8 | 0.574 |

### System Info

- hf_name: deu-eus
- source_languages: deu
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'eu']
- src_constituents: {'deu'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.test.txt
- src_alpha3: deu
- tgt_alpha3: eus
- short_pair: de-eu
- chrF2_score: 0.574
- bleu: 31.8
- brevity_penalty: 0.921
- ref_len: 2829.0
- src_name: German
- tgt_name: Basque
- train_date: 2020-06-16
- src_alpha2: de
- tgt_alpha2: eu
- prefer_old: False
- long_pair: deu-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
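The System Info block above reports `bleu: 31.8` together with `brevity_penalty: 0.921` and `ref_len: 2829.0`. The brevity penalty is the standard BLEU length correction; a minimal sketch of the formula, where the hypothesis length of 2614 is my own back-computed assumption that roughly reproduces the reported value:

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as
    long as the reference, exp(1 - ref/hyp) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# ref_len=2829 comes from the card; hyp_len=2614 is an assumed value
# chosen to reproduce the reported brevity_penalty of ~0.921.
print(round(brevity_penalty(2829, 2614), 3))  # → 0.921
```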
## Nextcloud-AI/opus-mt-de-es

- author: Nextcloud-AI
- last_modified: 2023-08-16T11:27:48Z
- downloads: 119
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2024-02-23T10:38:02Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-es

* source languages: de
* target languages: es
* OPUS readme: [de-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.de.es | 48.5 | 0.676 |
## Helsinki-NLP/opus-mt-de-es

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:48Z
- downloads: 32,010
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-es

* source languages: de
* target languages: es
* OPUS readme: [de-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-es/opus-2020-01-15.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.de.es | 48.5 | 0.676 |
## Helsinki-NLP/opus-mt-de-en

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:46Z
- downloads: 673,311
- likes: 44
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "de", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-en

* source languages: de
* target languages: en
* OPUS readme: [de-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-------------------------|------|-------|
| newssyscomb2009.de.en | 29.4 | 0.557 |
| news-test2008.de.en | 27.8 | 0.548 |
| newstest2009.de.en | 26.8 | 0.543 |
| newstest2010.de.en | 30.2 | 0.584 |
| newstest2011.de.en | 27.4 | 0.556 |
| newstest2012.de.en | 29.1 | 0.569 |
| newstest2013.de.en | 32.1 | 0.583 |
| newstest2014-deen.de.en | 34.0 | 0.600 |
| newstest2015-ende.de.en | 34.2 | 0.599 |
| newstest2016-ende.de.en | 40.4 | 0.649 |
| newstest2017-ende.de.en | 35.7 | 0.610 |
| newstest2018-ende.de.en | 43.7 | 0.667 |
| newstest2019-deen.de.en | 40.1 | 0.642 |
| Tatoeba.de.en | 55.4 | 0.707 |
## Helsinki-NLP/opus-mt-de-efi

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:43Z
- downloads: 101
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "efi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-efi

* source languages: de
* target languages: efi
* OPUS readme: [de-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|--------------|------|-------|
| JW300.de.efi | 24.2 | 0.451 |
## Helsinki-NLP/opus-mt-de-ee

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:42Z
- downloads: 113
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ee", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-ee

* source languages: de
* target languages: ee
* OPUS readme: [de-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-------------|------|-------|
| JW300.de.ee | 24.6 | 0.463 |
## Helsinki-NLP/opus-mt-de-de

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:41Z
- downloads: 207
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-de

* source languages: de
* target languages: de
* OPUS readme: [de-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-de/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.de.de | 40.7 | 0.616 |
## Helsinki-NLP/opus-mt-de-ca

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:37Z
- downloads: 181
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
language:
- de
- ca
tags:
- translation
license: apache-2.0
---

### deu-cat

* source group: German
* target group: Catalan
* OPUS readme: [deu-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-cat/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.deu.cat | 37.4 | 0.582 |

### System Info

- hf_name: deu-cat
- source_languages: deu
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ca']
- src_constituents: {'deu'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-cat/opus-2020-06-16.test.txt
- src_alpha3: deu
- tgt_alpha3: cat
- short_pair: de-ca
- chrF2_score: 0.582
- bleu: 37.4
- brevity_penalty: 0.956
- ref_len: 5507.0
- src_name: German
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: de
- tgt_alpha2: ca
- prefer_old: False
- long_pair: deu-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
## Helsinki-NLP/opus-mt-de-bi

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:35Z
- downloads: 110
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "bi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-bi

* source languages: de
* target languages: bi
* OPUS readme: [de-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bi/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-------------|------|-------|
| JW300.de.bi | 25.7 | 0.450 |
## Helsinki-NLP/opus-mt-de-bcl

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:33Z
- downloads: 105
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "bcl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-bcl

* source languages: de
* target languages: bcl
* OPUS readme: [de-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|--------------|------|-------|
| JW300.de.bcl | 34.6 | 0.563 |
## Helsinki-NLP/opus-mt-de-af

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:29Z
- downloads: 259
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "af", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
language:
- de
- af
tags:
- translation
license: apache-2.0
---

### deu-afr

* source group: German
* target group: Afrikaans
* OPUS readme: [deu-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-afr/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): afr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.deu.afr | 51.3 | 0.690 |

### System Info

- hf_name: deu-afr
- source_languages: deu
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'af']
- src_constituents: {'deu'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: afr
- short_pair: de-af
- chrF2_score: 0.69
- bleu: 51.3
- brevity_penalty: 1.0
- ref_len: 9507.0
- src_name: German
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: af
- prefer_old: False
- long_pair: deu-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
## Helsinki-NLP/opus-mt-de-ZH

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:28Z
- downloads: 379
- likes: 2
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-ZH

* source languages: de
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-------------------|------|-------|
| bible-uedin.de.zh | 24.4 | 0.335 |
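The card above notes that this multilingual checkpoint needs a sentence-initial `>>id<<` token selecting the target variant. A minimal helper for building such inputs; the function name is mine, and `cmn` is one of the target IDs listed in the card:

```python
def with_target_token(text: str, lang_id: str) -> str:
    """Prefix a source sentence with the >>id<< target-language token
    that multilingual OPUS-MT checkpoints expect."""
    return f">>{lang_id}<< {text}"

print(with_target_token("Guten Tag!", "cmn"))  # → >>cmn<< Guten Tag!
```

The prefixed string is then tokenized and translated exactly like input to a bilingual checkpoint.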
## Nextcloud-AI/opus-mt-de-zh

- author: Nextcloud-AI
- last_modified: 2023-08-16T11:27:28Z
- downloads: 106
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2024-02-23T10:38:53Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-ZH

* source languages: de
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-------------------|------|-------|
| bible-uedin.de.zh | 24.4 | 0.335 |
## Helsinki-NLP/opus-mt-da-fi

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:24Z
- downloads: 315
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "da", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-da-fi

* source languages: da
* target languages: fi
* OPUS readme: [da-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fi/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.da.fi | 39.0 | 0.629 |
## Helsinki-NLP/opus-mt-da-eo

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:22Z
- downloads: 108
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "da", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
language:
- da
- eo
tags:
- translation
license: apache-2.0
---

### dan-epo

* source group: Danish
* target group: Esperanto
* OPUS readme: [dan-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-epo/README.md)
* model: transformer-align
* source language(s): dan
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.dan.epo | 23.6 | 0.432 |

### System Info

- hf_name: dan-epo
- source_languages: dan
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'eo']
- src_constituents: {'dan'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.test.txt
- src_alpha3: dan
- tgt_alpha3: epo
- short_pair: da-eo
- chrF2_score: 0.432
- bleu: 23.6
- brevity_penalty: 0.942
- ref_len: 69856.0
- src_name: Danish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: da
- tgt_alpha2: eo
- prefer_old: False
- long_pair: dan-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
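These cards report a chr-F score alongside BLEU. chr-F is an F-score over character n-grams; a simplified illustrative sketch of the idea (β=2, n-grams of order 1–6, whitespace stripped; this is not the sacrebleu implementation and omits its word-n-gram and smoothing options):

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Counts of character n-grams, ignoring spaces."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: average F-beta over character 1..max_n-grams."""
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        matches = sum((hyp & ref).values())   # clipped n-gram matches
        prec = matches / sum(hyp.values())
        rec = matches / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0

# A hypothesis identical to the reference scores 1.0.
print(round(chrf("la kato sidas", "la kato sidas"), 3))  # → 1.0
```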
## Helsinki-NLP/opus-mt-da-de

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:20Z
- downloads: 16,695
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "da", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-da-de

* source languages: da
* target languages: de
* OPUS readme: [da-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-de/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-de/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-de/opus-2020-01-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.da.de | 57.4 | 0.740 |
## Helsinki-NLP/opus-mt-cy-en

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:19Z
- downloads: 4,822
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cy", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-cy-en

* source languages: cy
* target languages: en
* OPUS readme: [cy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cy-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/cy-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cy-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cy-en/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.cy.en | 33.0 | 0.525 |
## Helsinki-NLP/opus-mt-csg-es

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:15Z
- downloads: 117
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "csg", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-csg-es

* source languages: csg
* target languages: es
* OPUS readme: [csg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/csg-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/csg-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/csg-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/csg-es/opus-2020-01-15.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|--------------|------|-------|
| JW300.csg.es | 93.1 | 0.952 |
## Helsinki-NLP/opus-mt-cs-uk

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:14Z
- downloads: 119
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
language:
- cs
- uk
tags:
- translation
license: apache-2.0
---

### ces-ukr

* source group: Czech
* target group: Ukrainian
* OPUS readme: [ces-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-ukr/README.md)
* model: transformer-align
* source language(s): ces
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.ces.ukr | 50.9 | 0.680 |

### System Info

- hf_name: ces-ukr
- source_languages: ces
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['cs', 'uk']
- src_constituents: {'ces'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.test.txt
- src_alpha3: ces
- tgt_alpha3: ukr
- short_pair: cs-uk
- chrF2_score: 0.68
- bleu: 50.9
- brevity_penalty: 0.994
- ref_len: 8891.0
- src_name: Czech
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: cs
- tgt_alpha2: uk
- prefer_old: False
- long_pair: ces-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
## Helsinki-NLP/opus-mt-cs-fr

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:12Z
- downloads: 124
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-cs-fr

* source languages: cs
* target languages: fr
* OPUS readme: [cs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fr/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|--------------------|------|-------|
| GlobalVoices.cs.fr | 21.0 | 0.488 |
## Helsinki-NLP/opus-mt-cs-fi

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:11Z
- downloads: 117
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
tags:
- translation
license: apache-2.0
---

### opus-mt-cs-fi

* source languages: cs
* target languages: fi
* OPUS readme: [cs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-------------|------|-------|
| JW300.cs.fi | 25.5 | 0.523 |
## Helsinki-NLP/opus-mt-cs-eo

- author: Helsinki-NLP
- last_modified: 2023-08-16T11:27:10Z
- downloads: 111
- likes: 0
- library_name: transformers
- tags: ["transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "cs", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
- pipeline_tag: translation
- createdAt: 2022-03-02T23:29:04Z

Card:

---
language:
- cs
- eo
tags:
- translation
license: apache-2.0
---

### ces-epo

* source group: Czech
* target group: Esperanto
* OPUS readme: [ces-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-epo/README.md)
* model: transformer-align
* source language(s): ces
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.ces.epo | 26.0 | 0.459 |

### System Info

- hf_name: ces-epo
- source_languages: ces
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['cs', 'eo']
- src_constituents: {'ces'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-epo/opus-2020-06-16.test.txt
- src_alpha3: ces
- tgt_alpha3: epo
- short_pair: cs-eo
- chrF2_score: 0.459
- bleu: 26.0
- brevity_penalty: 0.94
- ref_len: 24901.0
- src_name: Czech
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: cs
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ces-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-crs-fr
Helsinki-NLP
2023-08-16T11:27:06Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "crs", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-crs-fr * source languages: crs * target languages: fr * OPUS readme: [crs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.crs.fr | 29.4 | 0.475 |
Helsinki-NLP/opus-mt-crs-fi
Helsinki-NLP
2023-08-16T11:27:05Z
119
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "crs", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-crs-fi * source languages: crs * target languages: fi * OPUS readme: [crs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.crs.fi | 25.6 | 0.479 |
Helsinki-NLP/opus-mt-crs-de
Helsinki-NLP
2023-08-16T11:27:02Z
112
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "crs", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-crs-de * source languages: crs * target languages: de * OPUS readme: [crs-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.crs.de | 20.4 | 0.397 |
Helsinki-NLP/opus-mt-cpf-en
Helsinki-NLP
2023-08-16T11:26:59Z
112
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ht", "cpf", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ht - cpf - en tags: - translation license: apache-2.0 --- ### cpf-eng * source group: Creoles and pidgins, French-based * target group: English * OPUS readme: [cpf-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md) * model: transformer * source language(s): gcf_Latn hat mfe * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.gcf-eng.gcf.eng | 8.4 | 0.229 | | Tatoeba-test.hat-eng.hat.eng | 28.0 | 0.421 | | Tatoeba-test.mfe-eng.mfe.eng | 66.0 | 0.808 | | Tatoeba-test.multi.eng | 16.3 | 0.323 | ### System Info: - hf_name: cpf-eng - source_languages: cpf - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ht', 'cpf', 'en'] - src_constituents: {'gcf_Latn', 'hat', 'mfe'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt - src_alpha3: cpf - tgt_alpha3: eng - short_pair: cpf-en - chrF2_score: 0.32299999999999995 - bleu: 16.3 - brevity_penalty: 1.0 - ref_len: 990.0 - src_name: Creoles and pidgins, French-based - tgt_name: English - train_date: 2020-07-31 - src_alpha2: cpf - tgt_alpha2: en - prefer_old: False - long_pair: cpf-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-chk-sv
Helsinki-NLP
2023-08-16T11:26:58Z
112
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "chk", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-chk-sv * source languages: chk * target languages: sv * OPUS readme: [chk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.chk.sv | 23.6 | 0.406 |
Helsinki-NLP/opus-mt-ceb-sv
Helsinki-NLP
2023-08-16T11:26:53Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ceb", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ceb-sv * source languages: ceb * target languages: sv * OPUS readme: [ceb-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ceb.sv | 35.5 | 0.552 |
Helsinki-NLP/opus-mt-ceb-es
Helsinki-NLP
2023-08-16T11:26:50Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ceb", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ceb-es * source languages: ceb * target languages: es * OPUS readme: [ceb-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ceb.es | 31.6 | 0.508 |
Helsinki-NLP/opus-mt-ceb-en
Helsinki-NLP
2023-08-16T11:26:49Z
1,276
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ceb", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ceb - en tags: - translation license: apache-2.0 --- ### ceb-eng * source group: Cebuano * target group: English * OPUS readme: [ceb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md) * model: transformer-align * source language(s): ceb * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ceb.eng | 21.5 | 0.387 | ### System Info: - hf_name: ceb-eng - source_languages: ceb - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ceb', 'en'] - src_constituents: {'ceb'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt - src_alpha3: ceb - tgt_alpha3: eng - short_pair: ceb-en - chrF2_score: 0.387 - bleu: 21.5 - brevity_penalty: 1.0 - ref_len: 2293.0 - src_name: Cebuano - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ceb - tgt_alpha2: en - prefer_old: False - long_pair: ceb-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-cau-en
Helsinki-NLP
2023-08-16T11:26:47Z
119
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ab", "ka", "ce", "cau", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ab - ka - ce - cau - en tags: - translation license: apache-2.0 --- ### cau-eng * source group: Caucasian languages * target group: English * OPUS readme: [cau-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md) * model: transformer * source language(s): abk ady che kat * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.abk-eng.abk.eng | 0.3 | 0.134 | | Tatoeba-test.ady-eng.ady.eng | 0.4 | 0.104 | | Tatoeba-test.che-eng.che.eng | 0.6 | 0.128 | | Tatoeba-test.kat-eng.kat.eng | 18.6 | 0.366 | | Tatoeba-test.multi.eng | 16.6 | 0.351 | ### System Info: - hf_name: cau-eng - source_languages: cau - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ab', 'ka', 'ce', 'cau', 'en'] - src_constituents: {'abk', 'kat', 'che', 'ady'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt - src_alpha3: cau - tgt_alpha3: eng - short_pair: cau-en - chrF2_score: 0.35100000000000003 - bleu: 16.6 - brevity_penalty: 1.0 - ref_len: 6285.0 - src_name: Caucasian languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: cau - tgt_alpha2: en - prefer_old: False - long_pair: cau-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ca-uk
Helsinki-NLP
2023-08-16T11:26:46Z
145
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ca - uk tags: - translation license: apache-2.0 --- ### cat-ukr * source group: Catalan * target group: Ukrainian * OPUS readme: [cat-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ukr/README.md) * model: transformer-align * source language(s): cat * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.cat.ukr | 28.6 | 0.503 | ### System Info: - hf_name: cat-ukr - source_languages: cat - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ca', 'uk'] - src_constituents: {'cat'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.test.txt - src_alpha3: cat - tgt_alpha3: ukr - short_pair: ca-uk - chrF2_score: 0.503 - bleu: 28.6 - brevity_penalty: 0.9670000000000001 - ref_len: 2438.0 - src_name: Catalan - tgt_name: Ukrainian - train_date: 2020-06-16 - src_alpha2: ca - tgt_alpha2: uk - prefer_old: False - long_pair: cat-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ca-pt
Helsinki-NLP
2023-08-16T11:26:45Z
132
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "pt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ca - pt tags: - translation license: apache-2.0 --- ### cat-por * source group: Catalan * target group: Portuguese * OPUS readme: [cat-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md) * model: transformer-align * source language(s): cat * target language(s): por * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.cat.por | 44.9 | 0.658 | ### System Info: - hf_name: cat-por - source_languages: cat - target_languages: por - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ca', 'pt'] - src_constituents: {'cat'} - tgt_constituents: {'por'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt - src_alpha3: cat - tgt_alpha3: por - short_pair: ca-pt - chrF2_score: 0.6579999999999999 - bleu: 44.9 - brevity_penalty: 0.953 - ref_len: 5847.0 - src_name: Catalan - tgt_name: Portuguese - train_date: 2020-06-17 - src_alpha2: ca - tgt_alpha2: pt - prefer_old: False - long_pair: cat-por - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ca-es
Helsinki-NLP
2023-08-16T11:26:40Z
832
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ca-es * source languages: ca * target languages: es * OPUS readme: [ca-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ca.es | 74.9 | 0.863 |
Helsinki-NLP/opus-mt-ca-en
Helsinki-NLP
2023-08-16T11:26:39Z
8,325
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ca-en * source languages: ca * target languages: en * OPUS readme: [ca-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ca.en | 51.4 | 0.678 |
Helsinki-NLP/opus-mt-bzs-sv
Helsinki-NLP
2023-08-16T11:26:37Z
122
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bzs-sv * source languages: bzs * target languages: sv * OPUS readme: [bzs-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bzs.sv | 30.7 | 0.489 |
Helsinki-NLP/opus-mt-bzs-en
Helsinki-NLP
2023-08-16T11:26:32Z
261
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bzs-en * source languages: bzs * target languages: en * OPUS readme: [bzs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bzs.en | 44.5 | 0.605 |
Helsinki-NLP/opus-mt-bnt-en
Helsinki-NLP
2023-08-16T11:26:31Z
198
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - sn - zu - rw - lg - ts - ln - ny - xh - rn - bnt - en tags: - translation license: apache-2.0 --- ### bnt-eng * source group: Bantu languages * target group: English * OPUS readme: [bnt-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md) * model: transformer * source language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kin-eng.kin.eng | 31.7 | 0.481 | | Tatoeba-test.lin-eng.lin.eng | 8.3 | 0.271 | | Tatoeba-test.lug-eng.lug.eng | 5.3 | 0.128 | | Tatoeba-test.multi.eng | 23.1 | 0.394 | | Tatoeba-test.nya-eng.nya.eng | 38.3 | 0.527 | | Tatoeba-test.run-eng.run.eng | 26.6 | 0.431 | | Tatoeba-test.sna-eng.sna.eng | 27.5 | 0.440 | | Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.195 | | Tatoeba-test.toi-eng.toi.eng | 16.2 | 0.342 | | Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 | | Tatoeba-test.umb-eng.umb.eng | 8.4 | 0.231 | | Tatoeba-test.xho-eng.xho.eng | 37.2 | 0.554 | | Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.576 | ### System Info: - hf_name: bnt-eng - source_languages: bnt - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en'] - src_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt - src_alpha3: bnt - tgt_alpha3: eng - short_pair: bnt-en - chrF2_score: 0.39399999999999996 - bleu: 23.1 - brevity_penalty: 1.0 - ref_len: 14565.0 - src_name: Bantu languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: bnt - tgt_alpha2: en - prefer_old: False - long_pair: bnt-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-bi-fr
Helsinki-NLP
2023-08-16T11:26:28Z
109
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bi-fr * source languages: bi * target languages: fr * OPUS readme: [bi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.fr | 21.5 | 0.382 |
Helsinki-NLP/opus-mt-bi-es
Helsinki-NLP
2023-08-16T11:26:27Z
112
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bi-es * source languages: bi * target languages: es * OPUS readme: [bi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.es | 21.1 | 0.388 |
Helsinki-NLP/opus-mt-bi-en
Helsinki-NLP
2023-08-16T11:26:26Z
142
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bi-en * source languages: bi * target languages: en * OPUS readme: [bi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.en | 30.3 | 0.458 |
Helsinki-NLP/opus-mt-bg-uk
Helsinki-NLP
2023-08-16T11:26:25Z
139
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - bg - uk tags: - translation license: apache-2.0 --- ### bul-ukr * source group: Bulgarian * target group: Ukrainian * OPUS readme: [bul-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md) * model: transformer-align * source language(s): bul * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.ukr | 49.2 | 0.683 | ### System Info: - hf_name: bul-ukr - source_languages: bul - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'uk'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt - src_alpha3: bul - tgt_alpha3: ukr - short_pair: bg-uk - chrF2_score: 0.6829999999999999 - bleu: 49.2 - brevity_penalty: 0.983 - ref_len: 4932.0 - src_name: Bulgarian - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: bg - tgt_alpha2: uk - prefer_old: False - long_pair: bul-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-bg-tr
Helsinki-NLP
2023-08-16T11:26:23Z
114
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - bg - tr tags: - translation license: apache-2.0 --- ### bul-tur * source group: Bulgarian * target group: Turkish * OPUS readme: [bul-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md) * model: transformer * source language(s): bul bul_Latn * target language(s): tur * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.tur | 40.9 | 0.687 | ### System Info: - hf_name: bul-tur - source_languages: bul - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'tr'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: tur - short_pair: bg-tr - chrF2_score: 0.687 - bleu: 40.9 - brevity_penalty: 0.946 - ref_len: 4948.0 - src_name: Bulgarian - tgt_name: Turkish - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: tr - prefer_old: False - long_pair: bul-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-bg-sv
Helsinki-NLP
2023-08-16T11:26:22Z
119
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bg-sv * source languages: bg * target languages: sv * OPUS readme: [bg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bg.sv | 29.1 | 0.494 |
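Each benchmark table above reports a chr-F score alongside BLEU. As a rough illustration of what that metric measures, here is a simplified, whitespace-stripped character n-gram F-score sketch (uniform averaging over n-gram orders, F-beta with beta=2); sacreBLEU is the reference implementation and will differ in detail:

```python
from collections import Counter

def _char_ngrams(text: str, n: int) -> Counter:
    text = text.replace(" ", "")  # chrF is computed over characters, ignoring spaces
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis: str, reference: str, n_max: int = 6, beta: float = 2.0) -> float:
    # Average n-gram precision and recall over orders 1..n_max, then combine
    # into an F-beta score that weights recall beta^2 times more than precision.
    precisions, recalls = [], []
    for n in range(1, n_max + 1):
        hyp, ref = _char_ngrams(hypothesis, n), _char_ngrams(reference, n)
        if not hyp and not ref:
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

Identical strings score 1.0 and fully disjoint strings score 0.0; partial overlaps fall in between.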
Helsinki-NLP/opus-mt-bg-it
Helsinki-NLP
2023-08-16T11:26:20Z
115
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - bg - it tags: - translation license: apache-2.0 --- ### bul-ita * source group: Bulgarian * target group: Italian * OPUS readme: [bul-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md) * model: transformer * source language(s): bul * target language(s): ita * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.ita | 43.1 | 0.653 | ### System Info: - hf_name: bul-ita - source_languages: bul - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'it'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: ita - short_pair: bg-it - chrF2_score: 0.653 - bleu: 43.1 - brevity_penalty: 0.987 - ref_len: 16951.0 - src_name: Bulgarian - tgt_name: Italian - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: it - prefer_old: False - long_pair: bul-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-bg-fr
Helsinki-NLP
2023-08-16T11:26:19Z
160
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - bg - fr tags: - translation license: apache-2.0 --- ### bul-fra * source group: Bulgarian * target group: French * OPUS readme: [bul-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md) * model: transformer * source language(s): bul * target language(s): fra * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.fra | 53.7 | 0.693 | ### System Info: - hf_name: bul-fra - source_languages: bul - target_languages: fra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'fr'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'fra'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: fra - short_pair: bg-fr - chrF2_score: 0.693 - bleu: 53.7 - brevity_penalty: 0.977 - ref_len: 3669.0 - src_name: Bulgarian - tgt_name: French - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: fr - prefer_old: False - long_pair: bul-fra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-bg-eo
Helsinki-NLP
2023-08-16T11:26:15Z
114
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - bg - eo tags: - translation license: apache-2.0 --- ### bul-epo * source group: Bulgarian * target group: Esperanto * OPUS readme: [bul-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md) * model: transformer-align * source language(s): bul * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.epo | 24.5 | 0.438 | ### System Info: - hf_name: bul-epo - source_languages: bul - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'eo'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt - src_alpha3: bul - tgt_alpha3: epo - short_pair: bg-eo - chrF2_score: 0.43799999999999994 - bleu: 24.5 - brevity_penalty: 0.9670000000000001 - ref_len: 4043.0 - src_name: Bulgarian - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: bg - tgt_alpha2: eo - prefer_old: False - long_pair: bul-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-bg-de
Helsinki-NLP
2023-08-16T11:26:13Z
123
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - bg - de tags: - translation license: apache-2.0 --- ### bul-deu * source group: Bulgarian * target group: German * OPUS readme: [bul-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md) * model: transformer * source language(s): bul * target language(s): deu * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.deu | 49.3 | 0.676 | ### System Info: - hf_name: bul-deu - source_languages: bul - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'de'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: deu - short_pair: bg-de - chrF2_score: 0.6759999999999999 - bleu: 49.3 - brevity_penalty: 1.0 - ref_len: 2218.0 - src_name: Bulgarian - tgt_name: German - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: de - prefer_old: False - long_pair: bul-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ber-fr
Helsinki-NLP
2023-08-16T11:26:12Z
133
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ber-fr * source languages: ber * target languages: fr * OPUS readme: [ber-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ber.fr | 60.2 | 0.754 |
Helsinki-NLP/opus-mt-ber-es
Helsinki-NLP
2023-08-16T11:26:11Z
119
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ber-es * source languages: ber * target languages: es * OPUS readme: [ber-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ber.es | 33.8 | 0.487 |
Helsinki-NLP/opus-mt-ber-en
Helsinki-NLP
2023-08-16T11:26:10Z
138
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ber-en * source languages: ber * target languages: en * OPUS readme: [ber-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ber.en | 37.3 | 0.566 |
Helsinki-NLP/opus-mt-bem-fi
Helsinki-NLP
2023-08-16T11:26:07Z
113
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bem-fi * source languages: bem * target languages: fi * OPUS readme: [bem-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.fi | 22.8 | 0.439 |
Helsinki-NLP/opus-mt-bcl-fr
Helsinki-NLP
2023-08-16T11:26:02Z
106
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-bcl-fr * source languages: bcl * target languages: fr * OPUS readme: [bcl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bcl.fr | 35.0 | 0.527 |
Helsinki-NLP/opus-mt-bat-en
Helsinki-NLP
2023-08-16T11:25:57Z
134
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "lv", "bat", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - lt - lv - bat - en tags: - translation license: apache-2.0 --- ### bat-eng * source group: Baltic languages * target group: English * OPUS readme: [bat-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md) * model: transformer * source language(s): lav lit ltg prg_Latn sgs * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2017-enlv-laveng.lav.eng | 27.5 | 0.566 | | newsdev2019-enlt-liteng.lit.eng | 27.8 | 0.557 | | newstest2017-enlv-laveng.lav.eng | 21.1 | 0.512 | | newstest2019-lten-liteng.lit.eng | 30.2 | 0.592 | | Tatoeba-test.lav-eng.lav.eng | 51.5 | 0.687 | | Tatoeba-test.lit-eng.lit.eng | 55.1 | 0.703 | | Tatoeba-test.multi.eng | 50.6 | 0.662 | | Tatoeba-test.prg-eng.prg.eng | 1.0 | 0.159 | | Tatoeba-test.sgs-eng.sgs.eng | 16.5 | 0.265 | ### System Info: - hf_name: bat-eng - source_languages: bat - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'lv', 'bat', 'en'] - src_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt - src_alpha3: bat - tgt_alpha3: eng - short_pair: bat-en - chrF2_score: 0.662 - bleu: 50.6 - brevity_penalty: 0.9890000000000001 - ref_len: 30772.0 - src_name: Baltic languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: bat - tgt_alpha2: en - prefer_old: False - long_pair: bat-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-az-tr
Helsinki-NLP
2023-08-16T11:25:56Z
344
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "az", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - az - tr tags: - translation license: apache-2.0 --- ### aze-tur * source group: Azerbaijani * target group: Turkish * OPUS readme: [aze-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md) * model: transformer-align * source language(s): aze_Latn * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.aze.tur | 24.4 | 0.529 | ### System Info: - hf_name: aze-tur - source_languages: aze - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['az', 'tr'] - src_constituents: {'aze_Latn'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt - src_alpha3: aze - tgt_alpha3: tur - short_pair: az-tr - chrF2_score: 0.529 - bleu: 24.4 - brevity_penalty: 0.956 - ref_len: 5380.0 - src_name: Azerbaijani - tgt_name: Turkish - train_date: 2020-06-16 - src_alpha2: az - tgt_alpha2: tr - prefer_old: False - long_pair: aze-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
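Several System Info blocks above report a `brevity_penalty` and `ref_len` alongside BLEU. These follow the standard BLEU brevity penalty, which is 1.0 when the hypothesis corpus is at least as long as the reference and exp(1 - r/c) when it is shorter. A minimal sketch:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    # Standard BLEU brevity penalty: penalize translations that are
    # shorter in aggregate than the reference corpus.
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A reported penalty of 1.0 (as in the bul-deu card) therefore means the system's output was not shorter than the reference; values just below 1.0 (e.g. 0.956 here) indicate mildly short output.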
Helsinki-NLP/opus-mt-ase-sv
Helsinki-NLP
2023-08-16T11:25:52Z
127
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ase-sv * source languages: ase * target languages: sv * OPUS readme: [ase-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.sv | 39.7 | 0.576 |
Helsinki-NLP/opus-mt-ase-en
Helsinki-NLP
2023-08-16T11:25:49Z
137
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ase-en * source languages: ase * target languages: en * OPUS readme: [ase-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.en | 99.5 | 0.997 |
Nextcloud-AI/opus-mt-ar-tr
Nextcloud-AI
2023-08-16T11:25:46Z
101
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:37:39Z
--- language: - ar - tr tags: - translation license: apache-2.0 --- ### ara-tur * source group: Arabic * target group: Turkish * OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md) * model: transformer * source language(s): apc_Latn ara ara_Latn arq_Latn * target language(s): tur * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.tur | 33.1 | 0.619 | ### System Info: - hf_name: ara-tur - source_languages: ara - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'tr'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: tur - short_pair: ar-tr - chrF2_score: 0.619 - bleu: 33.1 - brevity_penalty: 0.9570000000000001 - ref_len: 6949.0 - src_name: Arabic - tgt_name: Turkish - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: tr - prefer_old: False - long_pair: ara-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-it
Helsinki-NLP
2023-08-16T11:25:43Z
250
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ar - it tags: - translation license: apache-2.0 --- ### ara-ita * source group: Arabic * target group: Italian * OPUS readme: [ara-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md) * model: transformer * source language(s): ara * target language(s): ita * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.ita | 44.2 | 0.658 | ### System Info: - hf_name: ara-ita - source_languages: ara - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'it'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: ita - short_pair: ar-it - chrF2_score: 0.6579999999999999 - bleu: 44.2 - brevity_penalty: 0.9890000000000001 - ref_len: 1495.0 - src_name: Arabic - tgt_name: Italian - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: it - prefer_old: False - long_pair: ara-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Nextcloud-AI/opus-mt-ar-it
Nextcloud-AI
2023-08-16T11:25:43Z
111
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:37:30Z
--- language: - ar - it tags: - translation license: apache-2.0 --- ### ara-ita * source group: Arabic * target group: Italian * OPUS readme: [ara-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md) * model: transformer * source language(s): ara * target language(s): ita * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.ita | 44.2 | 0.658 | ### System Info: - hf_name: ara-ita - source_languages: ara - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'it'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: ita - short_pair: ar-it - chrF2_score: 0.6579999999999999 - bleu: 44.2 - brevity_penalty: 0.9890000000000001 - ref_len: 1495.0 - src_name: Arabic - tgt_name: Italian - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: it - prefer_old: False - long_pair: ara-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Nextcloud-AI/opus-mt-ar-fr
Nextcloud-AI
2023-08-16T11:25:41Z
104
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:37:21Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ar-fr * source languages: ar * target languages: fr * OPUS readme: [ar-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ar.fr | 43.5 | 0.602 |
Nextcloud-AI/opus-mt-ar-es
Nextcloud-AI
2023-08-16T11:25:40Z
113
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:37:11Z
--- language: - ar - es tags: - translation license: apache-2.0 --- ### ara-spa * source group: Arabic * target group: Spanish * OPUS readme: [ara-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md) * model: transformer * source language(s): apc apc_Latn ara arq * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.spa | 46.0 | 0.641 | ### System Info: - hf_name: ara-spa - source_languages: ara - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'es'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: spa - short_pair: ar-es - chrF2_score: 0.6409999999999999 - bleu: 46.0 - brevity_penalty: 0.9620000000000001 - ref_len: 9708.0 - src_name: Arabic - tgt_name: Spanish - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: es - prefer_old: False - long_pair: ara-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 
2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
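The `bleu`, `brevity_penalty`, and `ref_len` fields in cards like the one above are tied together by BLEU's brevity-penalty formula, BP = min(1, exp(1 − ref_len / hyp_len)). A small sketch of that relationship — the hypothesis length of ~9346 tokens is back-derived from the reported BP of 0.962 and ref_len of 9708, and is an assumption, not a value taken from the evaluation logs:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 unless the hypothesis is shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# ref_len comes from the card; hyp_len ~9346 is back-derived and hypothetical.
print(round(brevity_penalty(9346, 9708), 3))  # ~0.962, matching the card
```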
Helsinki-NLP/opus-mt-ar-eo
Helsinki-NLP
2023-08-16T11:25:37Z
132
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ar - eo tags: - translation license: apache-2.0 --- ### ara-epo * source group: Arabic * target group: Esperanto * OPUS readme: [ara-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md) * model: transformer-align * source language(s): apc apc_Latn ara arq arq_Latn arz * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.epo | 18.9 | 0.376 | ### System Info: - hf_name: ara-epo - source_languages: ara - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'eo'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt - src_alpha3: ara - tgt_alpha3: epo - short_pair: ar-eo - chrF2_score: 0.376 - bleu: 18.9 - brevity_penalty: 0.948 - ref_len: 4506.0 - src_name: Arabic - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: ar - tgt_alpha2: eo - prefer_old: False - long_pair: ara-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 
2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Nextcloud-AI/opus-mt-ar-de
Nextcloud-AI
2023-08-16T11:25:33Z
104
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-02-23T10:36:36Z
--- language: - ar - de tags: - translation license: apache-2.0 --- ### ara-deu * source group: Arabic * target group: German * OPUS readme: [ara-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md) * model: transformer-align * source language(s): afb apc ara ara_Latn arq arz * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.deu | 44.7 | 0.629 | ### System Info: - hf_name: ara-deu - source_languages: ara - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'de'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: deu - short_pair: ar-de - chrF2_score: 0.629 - bleu: 44.7 - brevity_penalty: 0.986 - ref_len: 8371.0 - src_name: Arabic - tgt_name: German - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: de - prefer_old: False - long_pair: ara-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 
2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-de
Helsinki-NLP
2023-08-16T11:25:33Z
757
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - ar - de tags: - translation license: apache-2.0 --- ### ara-deu * source group: Arabic * target group: German * OPUS readme: [ara-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md) * model: transformer-align * source language(s): afb apc ara ara_Latn arq arz * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.deu | 44.7 | 0.629 | ### System Info: - hf_name: ara-deu - source_languages: ara - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'de'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: deu - short_pair: ar-de - chrF2_score: 0.629 - bleu: 44.7 - brevity_penalty: 0.986 - ref_len: 8371.0 - src_name: Arabic - tgt_name: German - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: de - prefer_old: False - long_pair: ara-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 
2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-am-sv
Helsinki-NLP
2023-08-16T11:25:32Z
122
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "am", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-am-sv * source languages: am * target languages: sv * OPUS readme: [am-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/am-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.am.sv | 21.0 | 0.377 |
Helsinki-NLP/opus-mt-afa-en
Helsinki-NLP
2023-08-16T11:25:29Z
141
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "so", "ti", "am", "he", "mt", "ar", "afa", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - so - ti - am - he - mt - ar - afa - en tags: - translation license: apache-2.0 --- ### afa-eng * source group: Afro-Asiatic languages * target group: English * OPUS readme: [afa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md) * model: transformer * source language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.amh-eng.amh.eng | 35.9 | 0.550 | | Tatoeba-test.ara-eng.ara.eng | 36.6 | 0.543 | | Tatoeba-test.hau-eng.hau.eng | 11.9 | 0.327 | | Tatoeba-test.heb-eng.heb.eng | 42.7 | 0.591 | | Tatoeba-test.kab-eng.kab.eng | 4.3 | 0.213 | | Tatoeba-test.mlt-eng.mlt.eng | 44.3 | 0.618 | | Tatoeba-test.multi.eng | 27.1 | 0.464 | | Tatoeba-test.rif-eng.rif.eng | 3.5 | 0.141 | | Tatoeba-test.shy-eng.shy.eng | 0.6 | 0.125 | | Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 | | Tatoeba-test.tir-eng.tir.eng | 13.1 | 0.328 | ### System Info: - hf_name: afa-eng - source_languages: afa - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en'] - src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - 
tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt - src_alpha3: afa - tgt_alpha3: eng - short_pair: afa-en - chrF2_score: 0.46399999999999997 - bleu: 27.1 - brevity_penalty: 1.0 - ref_len: 69373.0 - src_name: Afro-Asiatic languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: afa - tgt_alpha2: en - prefer_old: False - long_pair: afa-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
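The chr-F column in the benchmark tables above is a character n-gram F-score. A minimal, simplified sketch of the idea follows; real evaluations use a full chrF implementation (e.g. sacrebleu), which handles multiple references, word n-grams in chrF++, and other details differently:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Character n-gram counts, ignoring whitespace as standard chrF does."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF in [0, 1]: F-score over character n-grams, recall-weighted."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # strings too short for this n-gram order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

print(round(chrf("hello world", "hello world"), 3))  # 1.0 for an exact match
```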
Helsinki-NLP/opus-mt-af-sv
Helsinki-NLP
2023-08-16T11:25:27Z
116
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-af-sv * source languages: af * target languages: sv * OPUS readme: [af-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.sv | 40.4 | 0.599 |
Helsinki-NLP/opus-mt-af-nl
Helsinki-NLP
2023-08-16T11:25:25Z
144
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "nl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - af - nl tags: - translation license: apache-2.0 --- ### afr-nld * source group: Afrikaans * target group: Dutch * OPUS readme: [afr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md) * model: transformer-align * source language(s): afr * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.nld | 55.2 | 0.715 | ### System Info: - hf_name: afr-nld - source_languages: afr - target_languages: nld - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'nl'] - src_constituents: {'afr'} - tgt_constituents: {'nld'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: nld - short_pair: af-nl - chrF2_score: 0.715 - bleu: 55.2 - brevity_penalty: 0.995 - ref_len: 6710.0 - src_name: Afrikaans - tgt_name: Dutch - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: nl - prefer_old: False - long_pair: afr-nld - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-af-fr
Helsinki-NLP
2023-08-16T11:25:24Z
129
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-af-fr * source languages: af * target languages: fr * OPUS readme: [af-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.fr | 35.3 | 0.543 |
Helsinki-NLP/opus-mt-af-es
Helsinki-NLP
2023-08-16T11:25:22Z
123
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - af - es tags: - translation license: apache-2.0 --- ### afr-spa * source group: Afrikaans * target group: Spanish * OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md) * model: transformer-align * source language(s): afr * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.spa | 49.9 | 0.680 | ### System Info: - hf_name: afr-spa - source_languages: afr - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'es'] - src_constituents: {'afr'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: spa - short_pair: af-es - chrF2_score: 0.68 - bleu: 49.9 - brevity_penalty: 1.0 - ref_len: 2783.0 - src_name: Afrikaans - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: es - prefer_old: False - long_pair: afr-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-af-en
Helsinki-NLP
2023-08-16T11:25:20Z
4,596
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-af-en * source languages: af * target languages: en * OPUS readme: [af-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.af.en | 60.8 | 0.736 |
Helsinki-NLP/opus-mt-af-de
Helsinki-NLP
2023-08-16T11:25:19Z
150
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-af-de * source languages: af * target languages: de * OPUS readme: [af-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-19.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.zip) * test set translations: [opus-2020-01-19.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.test.txt) * test set scores: [opus-2020-01-19.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.af.de | 48.6 | 0.681 |
Helsinki-NLP/opus-mt-aed-es
Helsinki-NLP
2023-08-16T11:25:17Z
116
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "aed", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-aed-es * source languages: aed * target languages: es * OPUS readme: [aed-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/aed-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.aed.es | 89.1 | 0.915 |
Helsinki-NLP/opus-mt-ROMANCE-en
Helsinki-NLP
2023-08-16T11:25:14Z
88,075
8
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "roa", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ROMANCE-en * source languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la * target languages: en * OPUS readme: [fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/README.md) * dataset: opus * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-04-01.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.zip) * test set translations: [opus-2020-04-01.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.test.txt) * test set scores: 
[opus-2020-04-01.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fr.en | 62.2 | 0.750 |
Hansaht/Text_classification_model_1_pytorch
Hansaht
2023-08-16T11:22:15Z
118
1
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-20T08:51:58Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: Text_classification_model_1_pytorch results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93292 language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Text_classification_model_1_pytorch This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3494 - Accuracy: 0.9329 ## Model description **Introduction:** In the realm of natural language processing and sentiment analysis, the utilization of pre-trained language models has proven to be highly effective. One such model is DistilBERT Uncased, a distilled and smaller version of the powerful BERT model. In this project, we explore the application of DistilBERT Uncased for text classification, specifically focusing on sentiment analysis using the IMDb dataset. **Model Overview:** Our text classification model is built upon the foundation of DistilBERT Uncased. This model, developed by Hugging Face, is a variant of BERT that retains much of BERT's effectiveness while being lighter and faster. DistilBERT retains the bidirectional attention mechanism and the masked language model pre-training objective of BERT. Our aim is to fine-tune this pre-trained model to accurately predict the sentiment of movie reviews as either positive or negative. ## Intended uses & limitations We've demonstrated the effectiveness of fine-tuning DistilBERT Uncased for text classification, specifically for sentiment analysis using the IMDb dataset. 
Our model showcases the power of transfer learning, allowing it to leverage pre-trained knowledge and adapt it to a specific task. The fine-tuned model can accurately classify movie reviews as positive or negative, paving the way for efficient sentiment analysis in various applications. ## Training and evaluation data **Dataset:** The IMDb dataset, a widely-used benchmark for sentiment analysis, consists of movie reviews labeled as positive or negative based on their sentiment. This dataset encompasses a wide range of reviews from IMDb, offering a diverse set of language patterns, tones, and opinions. By training our model on this dataset, we aim to enable it to learn the nuances of positive and negative sentiment expression. ## Training procedure ### Fine-Tuning Process: Fine-tuning the DistilBERT Uncased model for sentiment analysis involves adapting the pre-trained model to our specific task. This process entails: Data Preprocessing: The IMDb dataset is preprocessed, tokenized, and encoded into input features that DistilBERT Uncased can understand. These features include tokenized text and segment IDs, which differentiate between the actual text and padding tokens. Fine-Tuning Architecture: We attach a classification layer on top of DistilBERT's transformer layers. This additional layer learns to map the contextualized embeddings generated by DistilBERT to sentiment labels (positive or negative). Training: The model is trained using the training subset of the IMDb dataset. During training, the classification layer's weights are updated based on the model's predictions and the ground truth labels. We use cross-entropy loss as the optimization objective. Validation: The model's performance is evaluated on a separate validation subset of the IMDb dataset. This helps us monitor its learning progress and make adjustments if needed. 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2336 | 1.0 | 1563 | 0.2718 | 0.903 | | 0.162 | 2.0 | 3126 | 0.2392 | 0.9277 | | 0.0971 | 3.0 | 4689 | 0.3191 | 0.9312 | | 0.0535 | 4.0 | 6252 | 0.3211 | 0.9334 | | 0.034 | 5.0 | 7815 | 0.3494 | 0.9329 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
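The "Data Preprocessing" step described above — tokenizing and encoding reviews into input features DistilBERT understands — can be sketched with the base checkpoint's tokenizer; the review text is illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
enc = tokenizer(
    "A wonderful film with a terrible ending.",
    truncation=True,
    max_length=512,       # DistilBERT's maximum sequence length
    return_tensors=None,  # plain Python lists, for inspection
)
# The encoding begins with the [CLS] token and ends with [SEP].
print(enc["input_ids"][:3], "...", enc["input_ids"][-1])
```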
Vertti/TuumaPEFTExperiment
Vertti
2023-08-16T11:01:41Z
0
0
null
[ "region:us" ]
null
2023-08-16T08:11:40Z
### A completely useless adapter trained on nothing. --- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
mindreader/llama-recipe-7b-1epoch-8batch
mindreader
2023-08-16T10:52:06Z
4
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-08-15T06:37:14Z
--- library_name: peft --- ```bash python llama-recipes/llama_finetuning.py \ --use_peft \ --num_epochs 1 \ --peft_method lora \ --run_validation false \ --quantization \ --dataset alpaca_dataset \ --model_name meta-llama/Llama-2-7b-chat-hf \ --save_model \ --save_optimizer \ --batch_size_training 8 \ --output_dir ./save ``` ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
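The `--peft_method lora` flag above trains a low-rank update on top of frozen base weights. A toy numerical sketch of the idea (illustrative sizes, not Llama-2's actual dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                       # hidden size and LoRA rank (toy values)
W = rng.normal(size=(d, d))       # frozen pretrained weight
A = rng.normal(size=(r, d))       # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialized

x = rng.normal(size=d)
# Forward pass: base output plus the low-rank correction B @ (A @ x)
y = W @ x + B @ (A @ x)

# With B initialized to zero, the adapter starts as an exact no-op
assert np.allclose(y, W @ x)
print(y.shape)
```

Because `B` starts at zero, training begins from the unchanged base model, and only the small `A`/`B` matrices need to be saved — which is why the adapter checkpoint is tiny compared to the 7B base model.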
bhushan4401/xyz
bhushan4401
2023-08-16T10:45:48Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-16T10:33:43Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: bhushan4401/xyz results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bhushan4401/xyz This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5297 - Validation Loss: 0.2912 - Train Accuracy: 1.0 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.5297 | 0.2912 | 1.0 | 0 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.4 - Tokenizers 0.13.3
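The `PolynomialDecay` schedule in the optimizer config above, with `power: 1.0` and `cycle: False`, is plain linear decay from the initial learning rate to zero over `decay_steps`; a small sketch of how the rate evolves:

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=7810, power=1.0):
    """Linear decay when power == 1.0, matching the config above."""
    step = min(step, decay_steps)
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # full learning rate at step 0
print(polynomial_decay(3905))   # halfway through: half the rate
print(polynomial_decay(7810))   # decayed to zero
```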
ELggman/distilbert-base-uncased-finetuned-imdb
ELggman
2023-08-16T10:34:54Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-16T10:29:46Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6961 | 1.0 | 157 | 2.5442 | | 2.5696 | 2.0 | 314 | 2.4639 | | 2.5438 | 3.0 | 471 | 2.4252 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
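For a fill-mask (masked-LM) model like this, the evaluation loss is mean cross-entropy per masked token, so a perplexity can be read straight off the reported number:

```python
import math

eval_loss = 2.4252            # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))   # roughly 11.3
```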
CyberHarem/hestia_isitwrongtotrytopickupgirlsinadungeon
CyberHarem
2023-08-16T10:21:29Z
0
1
null
[ "art", "text-to-image", "dataset:CyberHarem/hestia_isitwrongtotrytopickupgirlsinadungeon", "license:mit", "region:us" ]
text-to-image
2023-08-16T10:15:43Z
--- license: mit datasets: - CyberHarem/hestia_isitwrongtotrytopickupgirlsinadungeon pipeline_tag: text-to-image tags: - art --- # Lora of hestia_isitwrongtotrytopickupgirlsinadungeon This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 1500, you need to download `1500/hestia_isitwrongtotrytopickupgirlsinadungeon.pt` as the embedding and `1500/hestia_isitwrongtotrytopickupgirlsinadungeon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The trigger word is `hestia_isitwrongtotrytopickupgirlsinadungeon`.** These are available steps: | Steps | pattern_1 | pattern_2 | bikini | free | nude | Download | |--------:|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------------------------| | 1500 | ![pattern_1-1500](1500/previews/pattern_1.png) | ![pattern_2-1500](1500/previews/pattern_2.png) | ![bikini-1500](1500/previews/bikini.png) | ![free-1500](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 1400 | ![pattern_1-1400](1400/previews/pattern_1.png) | ![pattern_2-1400](1400/previews/pattern_2.png) | ![bikini-1400](1400/previews/bikini.png) | ![free-1400](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 1300 | ![pattern_1-1300](1300/previews/pattern_1.png) | ![pattern_2-1300](1300/previews/pattern_2.png) | ![bikini-1300](1300/previews/bikini.png) | ![free-1300](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 1200 | ![pattern_1-1200](1200/previews/pattern_1.png) | ![pattern_2-1200](1200/previews/pattern_2.png) | ![bikini-1200](1200/previews/bikini.png) | ![free-1200](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 1100 | ![pattern_1-1100](1100/previews/pattern_1.png) | ![pattern_2-1100](1100/previews/pattern_2.png) | ![bikini-1100](1100/previews/bikini.png) | ![free-1100](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 1000 | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![bikini-1000](1000/previews/bikini.png) | ![free-1000](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 900 | ![pattern_1-900](900/previews/pattern_1.png) | ![pattern_2-900](900/previews/pattern_2.png) | ![bikini-900](900/previews/bikini.png) | ![free-900](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 800 | ![pattern_1-800](800/previews/pattern_1.png) | ![pattern_2-800](800/previews/pattern_2.png) | ![bikini-800](800/previews/bikini.png) | ![free-800](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 700 | ![pattern_1-700](700/previews/pattern_1.png) | ![pattern_2-700](700/previews/pattern_2.png) | ![bikini-700](700/previews/bikini.png) | ![free-700](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 600 | ![pattern_1-600](600/previews/pattern_1.png) | ![pattern_2-600](600/previews/pattern_2.png) | ![bikini-600](600/previews/bikini.png) | ![free-600](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 500 | ![pattern_1-500](500/previews/pattern_1.png) | ![pattern_2-500](500/previews/pattern_2.png) | ![bikini-500](500/previews/bikini.png) | ![free-500](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 400 | ![pattern_1-400](400/previews/pattern_1.png) | ![pattern_2-400](400/previews/pattern_2.png) | ![bikini-400](400/previews/bikini.png) | ![free-400](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 300 | ![pattern_1-300](300/previews/pattern_1.png) | ![pattern_2-300](300/previews/pattern_2.png) | ![bikini-300](300/previews/bikini.png) | ![free-300](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 200 | ![pattern_1-200](200/previews/pattern_1.png) | ![pattern_2-200](200/previews/pattern_2.png) | ![bikini-200](200/previews/bikini.png) | ![free-200](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) | | 100 | ![pattern_1-100](100/previews/pattern_1.png) | ![pattern_2-100](100/previews/pattern_2.png) | ![bikini-100](100/previews/bikini.png) | ![free-100](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/hestia_isitwrongtotrytopickupgirlsinadungeon.zip) |
toastyfrosty/controlearth-sct
toastyfrosty
2023-08-16T10:20:34Z
4
0
diffusers
[ "diffusers", "license:apache-2.0", "region:us" ]
null
2023-08-14T06:59:31Z
--- license: apache-2.0 --- *(Note that this model is for comparison purposes only. A better performing model can be found [here](https://huggingface.co/tostyfrosty/controlearth).)* # Model description ControlNet model conditioned on OpenStreetMaps (OSM) to generate the corresponding satellite images. Trained on the region of Scotland. *To access the **better performing model** trained on the WorldImagery Clarity dataset, see [this model](https://huggingface.co/tostyfrosty/controlearth).* ## Dataset used for training The dataset used for the training procedure is the [WorldImagery dataset](https://www.arcgis.com/home/item.html?id=10df2279f9684e4a9f6a7f08febac2a9). This dataset is qualitatively worse than its predecessor [WorldImagery Clarity dataset](https://www.arcgis.com/home/item.html?id=ab399b847323487dba26809bf11ea91a). The code for the dataset construction can be accessed in https://github.com/tostyfrosty/map-sat. ![examples image](https://raw.githubusercontent.com/tostyfrosty/map-sat/main/imgs/examples-controlnet-sct.png)
AptaArkana/indonesian_toxic_classification
AptaArkana
2023-08-16T10:20:10Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-16T07:58:01Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: kata_kasar_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kata_kasar_test This model is a fine-tuned version of [indolem/indobertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0099 - Accuracy: 0.9963 - Precision: 0.9926 - Recall: 1.0 - F1: 0.9963 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.053 | 1.0 | 547 | 0.0215 | 0.9963 | 0.9944 | 0.9981 | 0.9963 | | 0.0043 | 2.0 | 1094 | 0.0099 | 0.9963 | 0.9926 | 1.0 | 0.9963 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cpu - Datasets 2.11.0 - Tokenizers 0.13.2
dantepalacio/ruLongT5-Large
dantepalacio
2023-08-16T10:13:07Z
80
0
transformers
[ "transformers", "pytorch", "longt5", "text2text-generation", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-14T08:56:57Z
--- language: - ru --- original model: agemagician/mlong-t5-tglobal-large adaptation guide: https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90 ```python import torch from transformers import MT5Tokenizer, LongT5ForConditionalGeneration model_name = "dantepalacio/ruLongT5-Large" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = LongT5ForConditionalGeneration.from_pretrained(model_name, ignore_mismatched_sizes=True) ```
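The adaptation guide linked above works by shrinking the multilingual vocabulary to the tokens actually used by the target language and slicing the embedding matrix accordingly (hence `ignore_mismatched_sizes=True` when loading). The core slicing step, sketched with toy sizes and a hypothetical list of surviving token ids:

```python
import numpy as np

vocab_size, hidden = 12, 4            # toy sizes; the real model is ~250k x 1024
embeddings = np.arange(vocab_size * hidden, dtype=float).reshape(vocab_size, hidden)

# ids of tokens that survive the language filter (hypothetical)
kept_ids = [0, 1, 5, 7, 11]
new_embeddings = embeddings[kept_ids]                        # smaller embedding matrix
old_to_new = {old: new for new, old in enumerate(kept_ids)}  # remap token ids

print(new_embeddings.shape)  # (5, 4)
assert np.allclose(new_embeddings[2], embeddings[5])
```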
DrasticActions/gok-so-vits-svc-models
DrasticActions
2023-08-16T10:11:14Z
0
0
null
[ "license:cc", "region:us" ]
null
2023-08-15T12:54:36Z
--- license: cc --- # GOK so-vits-svc models ## How to use - Install https://github.com/voicepaw/so-vits-svc-fork - [Download the models](https://huggingface.co/DrasticActions/gok-so-vits-svc-models/tree/main/Models) from this repo - Open the svc GUI - Under Paths, set the model path to the specific "G_*.pth" file you want to use. - Set the config path to the config.json from the same model path folder. - The Input Audio should only be a (single) speaker's voice. For that, you can use https://ultimatevocalremover.com/ - To create the file, click "Infer"
manuu01/xtremedistil-l6-h256-uncased-nli
manuu01
2023-08-16T10:09:36Z
69
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "dataset:scitail", "dataset:multi_nli", "dataset:anli", "dataset:snli", "dataset:bias-amplified-splits/wanli", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-13T21:37:40Z
--- tags: - generated_from_keras_callback model-index: - name: xtremedistil-l6-h256-uncased-nli results: [] datasets: - scitail - multi_nli - anli - snli - bias-amplified-splits/wanli --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-nli The model base is [xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased). It has been fine-tuned on: [snli](https://huggingface.co/datasets/snli), [wanli](https://huggingface.co/datasets/alisawuffles/WANLI), [mnli](https://huggingface.co/datasets/multi_nli), [anli](https://huggingface.co/datasets/anli), [scitail](https://huggingface.co/datasets/scitail) ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters ### Training results It achieved the following accuracy during training (on validation sets): SNLI: 87.90% MNLI: 82.27% ANLI_r3: 44.83% scitail: 91.02% ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.4 - Tokenizers 0.13.3
maroti/q-taxiv3
maroti
2023-08-16T09:54:03Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-16T09:54:01Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxiv3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.48 +/- 2.78 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="maroti/q-taxiv3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
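The Q-Learning agent in this card learns a table of state-action values with the standard temporal-difference update; a minimal sketch on a toy two-state problem (not Taxi-v3 itself):

```python
# Tabular Q-learning update: Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a])
alpha, gamma = 0.5, 0.9
Q = {0: [0.0, 0.0], 1: [0.0, 0.0]}  # two states, two actions

def update(s, a, r, s_next):
    td_target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])

update(0, 1, 1.0, 1)   # take action 1 in state 0, get reward 1, land in state 1
print(Q[0][1])          # 0.5
```

Repeating this update while acting (epsilon-)greedily is all the trained `q-learning.pkl` table amounts to.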
ailabturkiye/epha
ailabturkiye
2023-08-16T09:52:45Z
0
0
null
[ "music", "tr", "license:openrail", "region:us" ]
null
2023-08-16T09:48:53Z
--- license: openrail language: - tr tags: - music --- A voice model created from the audio of Epha's video. The training was done by me.
maroti/q-FrozenLake-v1-4x4-noSlippery
maroti
2023-08-16T09:51:38Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-16T09:51:35Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="maroti/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
huangyuyang/Qwen-7B-Chat-int8.flm
huangyuyang
2023-08-16T09:51:01Z
0
4
null
[ "license:apache-2.0", "region:us" ]
null
2023-08-16T09:06:35Z
--- license: apache-2.0 --- fastllm model for Qwen-7B-Chat-int8 Github address: https://github.com/ztxz16/fastllm
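An int8 export like this one stores each weight as an 8-bit integer plus a floating-point scale; a toy sketch of symmetric quantization (illustrative values, not fastllm's exact scheme):

```python
import numpy as np

weights = np.array([0.31, -1.2, 0.05, 0.88, -0.47])  # toy fp32 weights
scale = np.abs(weights).max() / 127                  # symmetric per-tensor scale
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale

print(q.dtype, q.nbytes)                   # int8: 5 bytes instead of 40
print(np.max(np.abs(dequant - weights)))   # small quantization error
```

In practice fastllm quantizes per-channel or per-group, but the storage trade-off is the same: one byte per weight plus a handful of scales.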
qgallouedec/tqc-PandaReach-v1-2232459529
qgallouedec
2023-08-16T09:43:21Z
5
0
stable-baselines3
[ "stable-baselines3", "PandaReach-v1", "deep-reinforcement-learning", "reinforcement-learning", "arxiv:2106.13687", "model-index", "region:us" ]
reinforcement-learning
2023-02-27T15:36:34Z
--- library_name: stable-baselines3 tags: - PandaReach-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: TQC results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReach-v1 type: PandaReach-v1 metrics: - type: mean_reward value: -2.20 +/- 0.75 name: mean_reward verified: false --- # **TQC** Agent playing **PandaReach-v1** This is a trained model of a **TQC** agent playing **PandaReach-v1** (arxiv.org/abs/2106.13687) using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo tqc --env PandaReach-v1 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo tqc --env PandaReach-v1 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo tqc --env PandaReach-v1 -orga qgallouedec -f logs/ python -m rl_zoo3.enjoy --algo tqc --env PandaReach-v1 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo tqc --env PandaReach-v1 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo tqc --env PandaReach-v1 -f logs/ -orga qgallouedec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('buffer_size', 1000000), ('ent_coef', 'auto'), ('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'), ('gamma', 0.95), ('learning_rate', 0.001), ('learning_starts', 1000), ('n_timesteps', 20000.0), ('normalize', True), ('policy', 'MultiInputPolicy'), ('policy_kwargs', 'dict(net_arch=[64, 64], n_critics=1)'), ('replay_buffer_class', 'HerReplayBuffer'), ('replay_buffer_kwargs', "dict( online_sampling=True, goal_selection_strategy='future', " 'n_sampled_goal=4 )'), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ``` # Environment Arguments ```python {'render': True} ```
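The `HerReplayBuffer` with `goal_selection_strategy='future'` in the hyperparameters above relabels stored transitions with goals actually achieved later in the same episode, which is what makes sparse-reward reaching tasks learnable. A minimal sketch of the relabeling step (toy data, not SB3's internals):

```python
import random

random.seed(0)
# A toy episode: achieved goals (e.g. end-effector positions) at each timestep
achieved = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (0.3, 0.1), (0.4, 0.2)]

def relabel_future(t, n_sampled_goal=4):
    """For the transition at time t, sample replacement goals from future timesteps."""
    future = achieved[t + 1:]
    return [random.choice(future) for _ in range(min(n_sampled_goal, len(future)))]

new_goals = relabel_future(t=1)
print(new_goals)
assert all(g in achieved[2:] for g in new_goals)
```

Each relabeled copy of the transition gets its reward recomputed against the new goal, so the buffer contains many "successful" experiences even when the original goal was never reached.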
AXX1995/homarekittenv1
AXX1995
2023-08-16T09:35:21Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-16T09:10:48Z
--- license: creativeml-openrail-m ---
PhysHunter/whisper-tiny-en
PhysHunter
2023-08-16T09:16:59Z
85
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-16T08:05:42Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-en results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.3020257826887661 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-en This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.5262 - Wer Ortho: 0.3119 - Wer: 0.3020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.4823 | 3.57 | 50 | 0.5315 | 0.3202 | 0.3088 | | 0.1361 | 7.14 | 100 | 0.4843 | 0.3253 | 0.3161 | | 0.0563 | 10.71 | 150 | 0.5113 | 0.3106 | 0.3020 | | 0.0374 | 14.29 | 200 | 0.5262 | 0.3119 | 0.3020 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
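The Wer metric reported above is word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

print(wer("pay my bill please", "pay the bill"))  # 2 errors / 4 words = 0.5
```

"Wer Ortho" in the table is the same computation on orthographic (unnormalized) text, before lowercasing and punctuation stripping.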
fp16-guy/YesMix_2.0_fp16_cleaned
fp16-guy
2023-08-16T09:12:55Z
0
1
null
[ "text-to-image", "region:us" ]
text-to-image
2023-08-05T09:17:35Z
--- pipeline_tag: text-to-image --- 【Checkpoint】YesMix, but fp16/cleaned - smaller size, same result. ======== /// **[**original checkpoint link**](https://civitai.com/models/9139/checkpointyesmix)** *(all rights to the model belong to zakp)* --- *[*grid 01*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/yesmix%2020%2001%2020230805110225-111-CheckpointYesmix_v20-Euler%20a-6.png) *(1.99gb version)* *[*grid 02*](https://huggingface.co/datasets/fp16-guy/grids/blob/main/yesmix%2020%2002%2020230805110340-111-CheckpointYesmix_v20-Euler%20a-6.png) *(1.83gb version - no vae)* *[*grid 03*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/CheckpointYesmix_v20%20inp%2001%2020230815215238-111-CheckpointYesmix_v20_fp16-Euler%20a-5.5.png) *(1.99gb inpainting version)* *[*grid 04*](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/CheckpointYesmix_v20%20inp%2002%2020230816120951-111-CheckpointYesmix_v20_fp16_no_vae-Euler%20a-5.5.png) *(1.83gb inpainting version - no vae)*
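The "fp16/cleaned - smaller size, same result" claim comes down to casting fp32 weights to half precision; the size/accuracy trade-off in miniature:

```python
import numpy as np

w32 = np.random.default_rng(0).normal(size=1000).astype(np.float32)
w16 = w32.astype(np.float16)

print(w32.nbytes, w16.nbytes)   # 4000 vs 2000 bytes: half the size
print(np.max(np.abs(w16.astype(np.float32) - w32)))  # tiny rounding error
```

For diffusion checkpoints the rounding error is far below the noise the sampler works with, which is why outputs are visually identical.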
Old-Shatterhand/esm_fine_fluorescence
Old-Shatterhand
2023-08-16T09:08:34Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "esm", "text-classification", "protein", "classification", "fluorescence", "en", "dataset:proteinea/fluorescence", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-12T06:24:24Z
--- license: mit datasets: - proteinea/fluorescence language: - en metrics: - accuracy library_name: transformers pipeline_tag: text-classification tags: - protein - esm - classification - fluorescence ---
huangyuyang/Qwen-7B-Chat-int4.flm
huangyuyang
2023-08-16T09:04:11Z
0
3
null
[ "license:apache-2.0", "region:us" ]
null
2023-08-16T08:08:01Z
--- license: apache-2.0 --- fastllm model for Qwen-7B-Chat-int4 Github address: https://github.com/ztxz16/fastllm
harshit989/my_awesome_billsum_model
harshit989
2023-08-16T08:59:17Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:billsum", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-16T08:33:35Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: my_awesome_billsum_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: ca_test args: default metrics: - name: Rouge1 type: rouge value: 0.1416 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4956 - Rouge1: 0.1416 - Rouge2: 0.0491 - Rougel: 0.1176 - Rougelsum: 0.1175 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.7923 | 0.1292 | 0.0404 | 0.1095 | 0.1094 | 19.0 | | No log | 2.0 | 124 | 2.5788 | 0.1378 | 0.0491 | 0.1166 | 0.1165 | 19.0 | | No log | 3.0 | 186 | 2.5125 | 0.1409 | 0.0486 | 0.1174 | 0.1172 | 19.0 | | No log | 4.0 | 248 | 2.4956 | 0.1416 | 0.0491 | 0.1176 | 0.1175 | 19.0 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
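The Rouge1 score above is unigram-overlap F1 between the generated and reference summaries; a minimal sketch (real ROUGE adds stemming, bootstrapping, and other details):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())      # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the bill cuts taxes", "the bill raises taxes"))  # 3 matches -> 0.75
```

Rouge2 and RougeL in the table are the same idea applied to bigrams and to the longest common subsequence, respectively.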
phatpt/q-FrozenLake-v1-4x4-noSlippery
phatpt
2023-08-16T08:30:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-16T08:30:00Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="phatpt/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
linoyts/lora-trained-xl-colab-woman-5e-06-1000
linoyts
2023-08-16T08:30:30Z
0
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-16T06:27:57Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks woman tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - LinoyTsaban/lora-trained-xl-colab-woman-5e-06-1000 These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
tulayturanmaku/bert2bert_law_summarization
tulayturanmaku
2023-08-16T08:03:17Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-16T07:37:48Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: bert2bert_law_summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert2bert_law_summarization This model is a fine-tuned version of [mrm8488/bert2bert_shared-turkish-summarization](https://huggingface.co/mrm8488/bert2bert_shared-turkish-summarization) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1184 - Rouge1: 0.6064 - Rouge2: 0.5608 - Rougel: 0.5828 - Rougelsum: 0.5836 - Gen Len: 63.2615 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.5546 | 1.0 | 520 | 1.2699 | 0.6047 | 0.5588 | 0.5795 | 0.5799 | 62.7038 | | 1.071 | 2.0 | 1040 | 1.1607 | 0.6075 | 0.5598 | 0.5814 | 0.5824 | 63.2269 | | 0.9101 | 3.0 | 1560 | 1.1268 | 0.6129 | 0.569 | 0.5884 | 0.5891 | 62.9654 | | 0.798 | 4.0 | 2080 | 1.1184 | 0.6064 | 0.5608 | 0.5828 | 0.5836 | 63.2615 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3