| column | type | stats |
|---|---|---|
| modelId | string | length 4-112 |
| sha | string | length 40 |
| lastModified | string | length 24 |
| tags | list | n/a |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | length 2-38 |
| config | null | n/a |
| id | string | length 4-112 |
| downloads | float64 | 0-36.8M |
| likes | float64 | 0-712 |
| library_name | string | 17 classes |
| __index_level_0__ | int64 | 0-38.5k |
| readme | string | length 0-186k |
SummerChiam/pond_image_classification_2
474b45a6cbb6edbba3a09081047477790dea5af7
2022-07-29T06:23:30.000Z
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index" ]
image-classification
false
SummerChiam
null
SummerChiam/pond_image_classification_2
12
null
transformers
10,900
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: pond_image_classification_2 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9974489808082581 --- # pond_image_classification_2 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Algae ![Algae](images/Algae.png) #### Boiling ![Boiling](images/Boiling.png) #### BoilingNight ![BoilingNight](images/BoilingNight.png) #### Normal ![Normal](images/Normal.png) #### NormalCement ![NormalCement](images/NormalCement.png) #### NormalNight ![NormalNight](images/NormalNight.png) #### NormalRain ![NormalRain](images/NormalRain.png)
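The card above describes the classifier and its example classes but includes no inference code. A minimal usage sketch, assuming only what the card's `image-classification` tag implies (the image file name is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="SummerChiam/pond_image_classification_2")

# Classify a local image; the file name is a placeholder.
for prediction in classifier("pond_photo.png"):
    print(prediction["label"], round(prediction["score"], 4))
```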
Frikallo/out
31d82a202d500064fbfb87c79140850f705f4652
2022-07-29T08:29:57.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-generation
false
Frikallo
null
Frikallo/out
12
null
transformers
10,901
--- license: mit tags: - generated_from_trainer model-index: - name: out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # out This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001372 - train_batch_size: 1 - eval_batch_size: 8 - seed: 2370848220 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
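Since the card lists only training details, here is a hedged generation sketch for this GPT-2 fine-tune (the prompt is arbitrary and the sampling settings are illustrative, not from the card):

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="Frikallo/out")
set_seed(42)  # fix the sampling seed for reproducibility
result = generator("The quick brown fox", max_length=50, num_return_sequences=1)
print(result[0]["generated_text"])
```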
rufimelo/Legal-BERTimbau-base
9871473bca33c3e3256761bfda1e565b8ec8c95a
2022-07-29T16:14:30.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
rufimelo
null
rufimelo/Legal-BERTimbau-base
12
null
transformers
10,902
--- license: mit ---
Akjder/DialoGPT-small-harrypotter
b8d3156e5a427a5eb86cc079380ebd89f2879676
2021-09-21T06:07:16.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Akjder
null
Akjder/DialoGPT-small-harrypotter
11
null
transformers
10,903
--- tags: - conversational --- # Harry Potter DialoGPT Model
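The card is a bare heading, so the following single-turn chat sketch assumes the standard DialoGPT usage pattern (user text terminated by the EOS token, reply decoded from the continuation); it is not taken from the card itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Akjder/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("Akjder/DialoGPT-small-harrypotter")

# Encode one user turn, terminated by the EOS token as DialoGPT expects.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```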
Alireza1044/albert-base-v2-mnli
ce8224e1445a916d9d9b9f721bed8dad382f35f0
2021-07-27T21:10:33.000Z
[ "pytorch", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
false
Alireza1044
null
Alireza1044/albert-base-v2-mnli
11
null
transformers
10,904
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model_index: - name: mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metric: name: Accuracy type: accuracy value: 0.8500813669650122 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5383 - Accuracy: 0.8501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
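MNLI is a sentence-pair task, so the sketch below feeds a premise/hypothesis pair rather than a single sentence. The example pair is illustrative, and the label names come from the checkpoint's config, which may use generic `LABEL_i` names:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Alireza1044/albert-base-v2-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode a premise/hypothesis pair as one input.
inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```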
Alireza1044/albert-base-v2-stsb
1fb5f36aefaf6c8d4c037b7648c182836497f6a0
2021-07-26T10:57:27.000Z
[ "pytorch", "tensorboard", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
false
Alireza1044
null
Alireza1044/albert-base-v2-stsb
11
null
transformers
10,905
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model_index: - name: stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metric: name: Spearmanr type: spearmanr value: 0.9050744778895732 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stsb This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.3978 - Pearson: 0.9090 - Spearmanr: 0.9051 - Combined Score: 0.9071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
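Unlike the MNLI card above, STS-B is a regression task: a sketch assuming the usual GLUE export with a single logit interpreted as a similarity score (roughly 0-5):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Alireza1044/albert-base-v2-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.",
                   "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    # Single regression logit = predicted similarity score.
    score = model(**inputs).logits.squeeze().item()
print(f"similarity: {score:.2f}")
```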
Alstractor/distilbert-base-uncased-finetuned-cola
186e5e4e33fb7f948b8b9a39e0afe317b08ec5ca
2021-11-04T21:34:27.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Alstractor
null
Alstractor/distilbert-base-uncased-finetuned-cola
11
null
transformers
10,906
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5343023846000738 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7272 - Matthews Correlation: 0.5343 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5219 | 1.0 | 535 | 0.5340 | 0.4215 | | 0.3467 | 2.0 | 1070 | 0.5131 | 0.5181 | | 0.2331 | 3.0 | 1605 | 0.6406 | 0.5040 | | 0.1695 | 4.0 | 2140 | 0.7272 | 0.5343 | | 0.1212 | 5.0 | 2675 | 0.8399 | 0.5230 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
AndrewMcDowell/wav2vec2-xls-r-300m-german-de
e6e17dd3843b4bbcc219a28fc0a8efb655f396ec
2022-03-23T18:35:11.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
AndrewMcDowell
null
AndrewMcDowell/wav2vec2-xls-r-300m-german-de
11
2
transformers
10,907
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_7_0 - robust-speech-event datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: XLS-R-300M - German results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: de metrics: - name: Test WER type: wer value: 20.16 - name: Test CER type: cer value: 5.06 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: de metrics: - name: Test WER type: wer value: 39.79 - name: Test CER type: cer value: 15.02 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: de metrics: - name: Test WER type: wer value: 47.95 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. eval results: WER: 0.20161578657865786 CER: 0.05062357805269733 --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. It achieves the following results on the evaluation set: - Loss: 0.1768 - Wer: 0.2016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.7531 | 0.04 | 500 | 5.4564 | 1.0 | | 2.9882 | 0.08 | 1000 | 3.0041 | 1.0 | | 2.1953 | 0.13 | 1500 | 1.1723 | 0.7121 | | 1.2406 | 0.17 | 2000 | 0.3656 | 0.3623 | | 1.1294 | 0.21 | 2500 | 0.2843 | 0.2926 | | 1.0731 | 0.25 | 3000 | 0.2554 | 0.2664 | | 1.051 | 0.3 | 3500 | 0.2387 | 0.2535 | | 1.0479 | 0.34 | 4000 | 0.2345 | 0.2512 | | 1.0026 | 0.38 | 4500 | 0.2270 | 0.2452 | | 0.9921 | 0.42 | 5000 | 0.2212 | 0.2353 | | 0.9839 | 0.47 | 5500 | 0.2141 | 0.2330 | | 0.9907 | 0.51 | 6000 | 0.2122 | 0.2334 | | 0.9788 | 0.55 | 6500 | 0.2114 | 0.2270 | | 0.9687 | 0.59 | 7000 | 0.2066 | 0.2323 | | 0.9777 | 0.64 | 7500 | 0.2033 | 0.2237 | | 0.9476 | 0.68 | 8000 | 0.2020 | 0.2194 | | 0.9625 | 0.72 | 8500 | 0.1977 | 0.2191 | | 0.9497 | 0.76 | 9000 | 0.1976 | 0.2175 | | 0.9781 | 0.81 | 9500 | 0.1956 | 0.2159 | | 0.9552 | 0.85 | 10000 | 0.1958 | 0.2191 | | 0.9345 | 0.89 | 10500 | 0.1964 | 0.2158 | | 0.9528 | 0.93 | 11000 | 0.1926 | 0.2154 | | 0.9502 | 0.98 | 11500 | 0.1953 | 0.2149 | | 0.9358 | 1.02 | 12000 | 0.1927 | 0.2167 | | 0.941 | 1.06 | 12500 | 0.1901 | 0.2115 | | 0.9287 | 1.1 | 13000 | 0.1936 | 0.2090 | | 0.9491 | 1.15 | 13500 | 0.1900 | 0.2104 | | 0.9478 | 1.19 | 14000 | 0.1931 | 0.2120 | | 0.946 | 1.23 | 14500 | 0.1914 | 0.2134 | | 0.9499 | 1.27 | 15000 | 0.1931 | 0.2173 | | 0.9346 | 1.32 | 15500 | 0.1913 | 0.2105 | | 0.9509 | 1.36 | 16000 | 0.1902 | 0.2137 | | 0.9294 | 1.4 | 16500 | 0.1895 | 0.2086 | | 0.9418 | 1.44 | 17000 | 0.1913 | 0.2183 | | 0.9302 | 1.49 | 17500 | 0.1884 | 0.2114 | | 0.9418 | 1.53 | 18000 | 0.1894 | 0.2108 | | 0.9363 | 1.57 | 18500 | 0.1886 | 0.2132 | | 0.9338 | 1.61 | 19000 | 0.1856 | 0.2078 | | 0.9185 | 1.66 | 19500 | 0.1852 | 0.2056 | | 0.9216 | 1.7 | 20000 | 0.1874 | 0.2095 | | 0.9176 | 1.74 | 20500 | 0.1873 | 0.2078 | | 0.9288 | 1.78 | 21000 | 0.1865 | 0.2097 | | 0.9278 | 1.83 | 21500 | 0.1869 | 0.2100 | | 0.9295 | 1.87 | 22000 | 0.1878 | 0.2095 | | 0.9221 | 1.91 | 22500 | 0.1852 | 0.2121 | | 0.924 | 1.95 | 23000 | 0.1855 | 0.2042 | | 0.9104 | 2.0 | 23500 | 0.1858 | 0.2105 | | 0.9284 | 2.04 | 24000 | 0.1850 | 0.2080 | | 0.9162 | 2.08 | 24500 | 0.1839 | 0.2045 | | 0.9111 | 2.12 | 25000 | 0.1838 | 0.2080 | | 0.91 | 2.17 | 25500 | 0.1889 | 0.2106 | | 0.9152 | 2.21 | 26000 | 0.1856 | 0.2026 | | 0.9209 | 2.25 | 26500 | 0.1891 | 0.2133 | | 0.9094 | 2.29 | 27000 | 0.1857 | 0.2089 | | 0.9065 | 2.34 | 27500 | 0.1840 | 0.2052 | | 0.9156 | 2.38 | 28000 | 0.1833 | 0.2062 | | 0.8986 | 2.42 | 28500 | 0.1789 | 0.2001 | | 0.9045 | 2.46 | 29000 | 0.1769 | 0.2022 | | 0.9039 | 2.51 | 29500 | 0.1819 | 0.2073 | | 0.9145 | 2.55 | 30000 | 0.1828 | 0.2063 | | 0.9081 | 2.59 | 30500 | 0.1811 | 0.2049 | | 0.9252 | 2.63 | 31000 | 0.1833 | 0.2086 | | 0.8957 | 2.68 | 31500 | 0.1795 | 0.2083 | | 0.891 | 2.72 | 32000 | 0.1809 | 0.2058 | | 0.9023 | 2.76 | 32500 | 0.1812 | 0.2061 | | 0.8918 | 2.8 | 33000 | 0.1775 | 0.1997 | | 0.8852 | 2.85 | 33500 | 0.1790 | 0.1997 | | 0.8928 | 2.89 | 34000 | 0.1767 | 0.2013 | | 0.9079 | 2.93 | 34500 | 0.1735 | 0.1986 | | 0.9032 | 2.97 | 35000 | 0.1793 | 0.2024 | | 0.9018 | 3.02 | 35500 | 0.1778 | 0.2027 | | 0.8846 | 3.06 | 36000 | 0.1776 | 0.2046 | | 0.8848 | 3.1 | 36500 | 0.1812 | 0.2064 | | 0.9062 | 3.14 | 37000 | 0.1800 | 0.2018 | | 0.9011 | 3.19 | 37500 | 0.1783 | 0.2049 | | 0.8996 | 3.23 | 38000 | 0.1810 | 0.2036 | | 0.893 | 3.27 | 38500 | 0.1805 | 0.2056 | | 0.897 | 3.31 | 39000 | 0.1773 | 0.2035 | | 0.8992 | 3.36 | 39500 | 0.1804 | 0.2054 | | 0.8987 | 3.4 | 40000 | 0.1768 | 0.2016 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset mozilla-foundation/common_voice_7_0 --config de --split test --log_outputs ``` 2. To evaluate on test dev data ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
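The card ships CLI evaluation commands but no in-Python transcription example; a minimal sketch (the audio file name is a placeholder, and wav2vec2 expects 16 kHz mono audio):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="AndrewMcDowell/wav2vec2-xls-r-300m-german-de")

# Transcribe a German audio clip; the file name is a placeholder.
print(asr("sample_de.wav")["text"])
```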
Aron/distilbert-base-uncased-finetuned-emotion
7e09eaf0dcaea74dcd36ad941fcc93e13f55d5fd
2022-02-23T10:34:14.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Aron
null
Aron/distilbert-base-uncased-finetuned-emotion
11
null
transformers
10,908
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.92 - name: F1 type: f1 value: 0.9201604193183255 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2295 - Accuracy: 0.92 - F1: 0.9202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8187 | 1.0 | 250 | 0.3137 | 0.902 | 0.8983 | | 0.2514 | 2.0 | 500 | 0.2295 | 0.92 | 0.9202 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
cc648b051333929ef52291982bfc852656c51849
2021-10-31T18:01:26.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Ayran
null
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
11
null
transformers
10,909
--- tags: - conversational --- # DialoGPT medium model (Harry Potter 1 through 4 plus 6)
BitanBiswas/mbert-bengali-ner-finetuned-ner
aac63e8421628df4ec11db7400406f1e84335572
2022-02-14T16:54:04.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
BitanBiswas
null
BitanBiswas/mbert-bengali-ner-finetuned-ner
11
null
transformers
10,910
Entry not found
CallumRai/HansardGPT2
f7d01bb2bafb914a7f315c272ec3b33f228f8372
2021-05-21T09:33:25.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
CallumRai
null
CallumRai/HansardGPT2
11
null
transformers
10,911
A PyTorch GPT-2 model trained on Hansard from 2019-01-01 to 2020-06-01. For more information, see: https://github.com/CallumRai/Hansard/
ClaudeYang/awesome_fb_model
432124511482ab93d8469a5f7780d82fd10318dc
2021-11-15T10:29:01.000Z
[ "pytorch", "bart", "text-classification", "dataset:multi_nli", "transformers", "zero-shot-classification" ]
zero-shot-classification
false
ClaudeYang
null
ClaudeYang/awesome_fb_model
11
null
transformers
10,912
--- pipeline_tag: zero-shot-classification datasets: - multi_nli widget: - text: "ETH" candidate_labels: "Location & Address, Employment, Organizational, Name, Service, Studies, Science" hypothesis_template: "This is {}." --- ETH Zeroshot
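The widget configuration above maps directly onto the zero-shot pipeline; a sketch using the card's own labels and hypothesis template:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="ClaudeYang/awesome_fb_model")
result = classifier(
    "ETH",
    candidate_labels=["Location & Address", "Employment", "Organizational",
                      "Name", "Service", "Studies", "Science"],
    hypothesis_template="This is {}.",
)
# Labels come back sorted by score, best first.
print(result["labels"][0], round(result["scores"][0], 4))
```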
Contrastive-Tension/BERT-Large-NLI-CT
264d3405d54241dfe71ca0d0971aa7e92883941c
2021-05-18T18:04:22.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Contrastive-Tension
null
Contrastive-Tension/BERT-Large-NLI-CT
11
null
transformers
10,913
Entry not found
DCU-NLP/electra-base-irish-cased-generator-v1
a7fbe12effe2daf8d519d6d2825e10523070dc37
2021-11-15T18:03:36.000Z
[ "pytorch", "electra", "fill-mask", "ga", "arxiv:2107.12930", "transformers", "irish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
DCU-NLP
null
DCU-NLP/electra-base-irish-cased-generator-v1
11
null
transformers
10,914
--- language: - ga license: apache-2.0 tags: - irish - electra widget: - text: "Ceoltóir [MASK] ab ea Johnny Cash." --- # gaELECTRA [gaELECTRA](https://arxiv.org/abs/2107.12930) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model. ### Limitations and bias Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations. ### BibTeX entry and citation info If you use this model in your research, please consider citing our paper: ``` @article{DBLP:journals/corr/abs-2107-12930, author = {James Barry and Joachim Wagner and Lauren Cassidy and Alan Cowap and Teresa Lynn and Abigail Walsh and M{\'{\i}}che{\'{a}}l J. {\'{O}} Meachair and Jennifer Foster}, title = {gaBERT - an Irish Language Model}, journal = {CoRR}, volume = {abs/2107.12930}, year = {2021}, url = {https://arxiv.org/abs/2107.12930}, archivePrefix = {arXiv}, eprint = {2107.12930}, timestamp = {Fri, 30 Jul 2021 13:03:06 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-12930.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
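The card's widget example translates directly into a fill-mask call on this generator model; a minimal sketch using that same masked sentence:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DCU-NLP/electra-base-irish-cased-generator-v1")

# Rank candidate fillers for the masked token in the widget sentence.
for candidate in fill_mask("Ceoltóir [MASK] ab ea Johnny Cash."):
    print(candidate["token_str"], round(candidate["score"], 4))
```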
DSI/personal_sentiment
1f10be4fb1420b1cf0efee0a9dca29ac7d47abdd
2021-11-13T18:51:22.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
DSI
null
DSI/personal_sentiment
11
null
transformers
10,915
Entry not found
Davlan/naija-twitter-sentiment-afriberta-large
3b9fda62f667a5930d9d2fc73a3ef65ba8564526
2022-06-27T11:50:40.000Z
[ "pytorch", "tf", "xlm-roberta", "text-classification", "hau", "ibo", "pcm", "yor", "multilingual", "arxiv:2201.08277", "transformers" ]
text-classification
false
Davlan
null
Davlan/naija-twitter-sentiment-afriberta-large
11
1
transformers
10,916
--- language: - hau - ibo - pcm - yor - multilingual --- # naija-twitter-sentiment-afriberta-large ## Model description **naija-twitter-sentiment-afriberta-large** is the first multilingual twitter **sentiment classification** model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá), based on a fine-tuned castorini/afriberta_large model. It achieves **state-of-the-art performance** for the twitter sentiment classification task, trained on the [NaijaSenti corpus](https://github.com/hausanlp/NaijaSenti). The model has been trained to classify tweets into 3 sentiment classes: negative, neutral and positive. Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from the [NaijaSenti](https://github.com/hausanlp/NaijaSenti) dataset. ## Intended uses & limitations #### How to use You can use this model with Transformers for Sentiment Classification. ```python from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax MODEL = "Davlan/naija-twitter-sentiment-afriberta-large" tokenizer = AutoTokenizer.from_pretrained(MODEL) # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) text = "I like you" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) id2label = {0:"positive", 1:"neutral", 2:"negative"} ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = id2label[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` #### Limitations and bias This model is limited by its training dataset and domain, i.e. Twitter. It may not generalize well to use cases in other domains. ## Training procedure This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the [original NaijaSenti paper](https://arxiv.org/abs/2201.08277). ## Eval results on Test set (F-score), average over 5 runs. language|F1-score -|- hau |81.2 ibo |80.8 pcm |74.5 yor |80.4 ### BibTeX entry and citation info ``` @inproceedings{Muhammad2022NaijaSentiAN, title={NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis}, author={Shamsuddeen Hassan Muhammad and David Ifeoluwa Adelani and Sebastian Ruder and Ibrahim Said Ahmad and Idris Abdulmumin and Bello Shehu Bello and Monojit Choudhury and Chris C. Emezue and Saheed Salahudeen Abdullahi and Anuoluwapo Aremu and Alipio Jeorge and Pavel B. Brazdil}, year={2022} } ```
DeadBeast/emoBERTTamil
60820db97992bedb7055e46570667d3178135467
2021-08-22T15:46:05.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:tamilmixsentiment", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
false
DeadBeast
null
DeadBeast/emoBERTTamil
11
2
transformers
10,917
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tamilmixsentiment metrics: - accuracy model_index: - name: emoBERTTamil results: - task: name: Text Classification type: text-classification dataset: name: tamilmixsentiment type: tamilmixsentiment args: default metric: name: Accuracy type: accuracy value: 0.671 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emoBERTTamil This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tamilmixsentiment dataset. It achieves the following results on the evaluation set: - Loss: 0.9666 - Accuracy: 0.671 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1128 | 1.0 | 250 | 1.0290 | 0.672 | | 1.0226 | 2.0 | 500 | 1.0172 | 0.686 | | 0.9137 | 3.0 | 750 | 0.9666 | 0.671 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test
3bb793c9a8488ce7fa40dc6baf7d0aa4d895866d
2021-10-20T06:22:41.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
Edomonndo
null
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test
11
null
transformers
10,918
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model_index: - name: opus-mt-ja-en-finetuned-ja-to-en_test results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation metric: name: Bleu type: bleu value: 80.2723 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ja-en-finetuned-ja-to-en_test This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4737 - Bleu: 80.2723 - Gen Len: 16.5492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 1.1237 | 1.0 | 247 | 0.6131 | 60.9383 | 16.4152 | | 0.5395 | 2.0 | 494 | 0.5274 | 67.5705 | 16.2883 | | 0.3584 | 3.0 | 741 | 0.5122 | 71.3098 | 16.3777 | | 0.2563 | 4.0 | 988 | 0.4887 | 73.6639 | 16.401 | | 0.138 | 5.0 | 1235 | 0.4796 | 76.7942 | 16.4873 | | 0.0979 | 6.0 | 1482 | 0.4849 | 76.9404 | 16.6162 | | 0.0792 | 7.0 | 1729 | 0.4806 | 78.9831 | 16.5442 | | 0.0569 | 8.0 | 1976 | 0.4765 | 79.3461 | 16.4873 | | 0.0299 | 9.0 | 2223 | 0.4751 | 79.7901 | 16.4863 | | 0.0204 | 10.0 | 2470 | 0.4737 | 80.2723 | 16.5492 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu111 - Datasets 1.10.2 - Tokenizers 0.10.3
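As a usage note for this Marian fine-tune, a hedged translation sketch (the Japanese input sentence is arbitrary, not from the card):

```python
from transformers import pipeline

translator = pipeline("translation", model="Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test")
print(translator("これはテストです。")[0]["translation_text"])
```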
EhsanYB/distilbert-finetuned-ner
371f93580b3932f62207c5bf67a1bae9639c033f
2022-01-14T10:09:06.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
EhsanYB
null
EhsanYB/distilbert-finetuned-ner
11
null
transformers
10,919
Entry not found
Evgeneus/distilbert-base-uncased-finetuned-ner
7e4b1dab0a02decf8bc0e45d8e0c469c888f6a3c
2021-12-13T11:57:39.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
Evgeneus
null
Evgeneus/distilbert-base-uncased-finetuned-ner
11
null
transformers
10,920
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.875445994161531 - name: Recall type: recall value: 0.9058060185703098 - name: F1 type: f1 value: 0.8903672751264571 - name: Accuracy type: accuracy value: 0.9763292928971993 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0845 - Precision: 0.8754 - Recall: 0.9058 - F1: 0.8904 - Accuracy: 0.9763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2529 | 1.0 | 878 | 0.0845 | 0.8754 | 0.9058 | 0.8904 | 0.9763 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
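A token-classification sketch for this NER fine-tune; `aggregation_strategy="simple"` merges word pieces into entity spans, and the example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Evgeneus/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```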
Galuh/id-journal-gpt2
66ed8c7923fb9dce2897b78bbc81b07abb1d9ecd
2021-08-01T14:07:43.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "id", "transformers" ]
text-generation
false
Galuh
null
Galuh/id-journal-gpt2
11
1
transformers
10,921
--- language: id widget: - text: "Penelitian ini bertujuan untuk menentukan identitas invertebrata laut dari Perairan Papua dengan teknik DNA barcoding" --- # Indonesian GPT-2 finetuned on Indonesian academic journals This is the [Indonesian gpt2-small model](https://huggingface.co/flax-community/gpt2-small-indonesian) fine-tuned on abstracts of Indonesian academic journals. All training was done on a TPUv2-8 VM sponsored by [TPU Research Cloud](https://sites.research.google/trc/). The demo can be found [here](https://huggingface.co/spaces/flax-community/gpt2-indonesian). ## How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='Galuh/id-journal-gpt2') >>> set_seed(42) >>> generator("Penelitian ini menggunakan teknik DNA barcoding untuk", max_length=30, num_return_sequences=5) [{'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk mendeteksi perubahan genetik bakteri pada udang windu. Empat tahap telah dilakukan, meliputi preparasi media untuk larva,'}, {'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk identifikasi gen pengasil flavonoid. Data yang diperoleh dari hasil PCR diidentifikasi dengan teknik sekuensing'}, {'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk mengekstraksi fragmen DNA dari sampel kulit buaya dan tulang anjing, di mana proses ini melibatkan karakterisasi enzim yang'}, {'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk melakukan transformasi. Tahapan transformasi meliputi seleksi sel dengan urutan (2, 8, 16,..., 18) dan'}, {'generated_text': 'Penelitian ini menggunakan teknik DNA barcoding untuk amplifikasi genom DNA dengan menggunakan primer TG8226 dan TG806. Metode pol'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Galuh/id-journal-gpt2') model = GPT2Model.from_pretrained('Galuh/id-journal-gpt2') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('Galuh/id-journal-gpt2') model = TFGPT2Model.from_pretrained('Galuh/id-journal-gpt2') text = "Ubah dengan teks apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Limitations and bias This model is originally the [Indonesian gpt2-small model](https://huggingface.co/flax-community/gpt2-small-indonesian), thus this model is also subject to the same [limitations and bias as the original model](https://huggingface.co/flax-community/gpt2-small-indonesian#limitations-and-bias). More detailed bias and analysis on this specific model is coming soon. ## Training data The model was trained on a dataset of Indonesian journals. We only trained this model on the abstracts. We extracted the abstracts with a script that finds any text located between the word "Abstrak" (abstract) and "Kata kunci" (keywords). The extraction script can be found [here](https://github.com/galuhsahid/id-journal-gpt2/). To separate the abstracts, we also add an end-of-text token (`<|endoftext|>`) between each abstract. The distribution of the training and evaluation datasets is as follows: | split | count | percentage | | ---------- | ---------- | -------------- | | train | 146,248 | 90% | | validation | 16,250 | 10% | ## Training procedure The model was trained on a TPUv2-8 VM provided by [TPU Research Cloud](https://sites.research.google/trc/). The training duration was `2h 30m 57s`. ### Evaluation results The model achieves the following results without any fine-tuning (zero-shot): | dataset | train loss | eval loss | eval perplexity | | ---------- | ---------- | -------------- | ---------- | | Indonesian journals dataset (abstract only) | 2.913 | 2.855 | 17.37 | ### Tracking The training process was tracked in [TensorBoard](https://huggingface.co/Galuh/id-journal-gpt2/tensorboard).
Geotrend/bert-base-it-cased
ca5125655bba48f7e5a9ca38bc8e79995440f6bf
2021-05-18T19:58:28.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "it", "dataset:wikipedia", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Geotrend
null
Geotrend/bert-base-it-cased
11
null
transformers
10,922
--- language: it datasets: wikipedia license: apache-2.0 --- # bert-base-it-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-it-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-it-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact [email protected] for any question, feedback or request.
Geotrend/bert-base-pl-cased
a78a67d5438ae11233f1af768f474221e3a1f855
2021-05-18T20:05:45.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "pl", "dataset:wikipedia", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Geotrend
null
Geotrend/bert-base-pl-cased
11
null
transformers
10,923
--- language: pl datasets: wikipedia license: apache-2.0 --- # bert-base-pl-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-pl-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-pl-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact [email protected] for any question, feedback or request.
Geotrend/distilbert-base-en-es-cased
0ac1cfe71b07fc80a7b2f18c055da1ece86c5f13
2021-08-16T13:58:36.000Z
[ "pytorch", "distilbert", "fill-mask", "multilingual", "dataset:wikipedia", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Geotrend
null
Geotrend/distilbert-base-en-es-cased
11
null
transformers
10,924
--- language: multilingual datasets: wikipedia license: apache-2.0 --- # distilbert-base-en-es-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-es-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-es-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact [email protected] for any question, feedback or request.
Geotrend/distilbert-base-en-nl-cased
6d0e35d3576d4ac1e3b97f67e0bbb8d2b6fccc5c
2021-07-27T10:22:30.000Z
[ "pytorch", "distilbert", "fill-mask", "multilingual", "dataset:wikipedia", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
Geotrend
null
Geotrend/distilbert-base-en-nl-cased
11
null
transformers
10,925
--- language: multilingual datasets: wikipedia license: apache-2.0 --- # distilbert-base-en-nl-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-nl-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-nl-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact [email protected] for any question, feedback or request.
Helsinki-NLP/opus-mt-ase-de
8df440a31f6a5af4aa0f9512140373a0ee8eed3d
2021-09-09T21:26:23.000Z
[ "pytorch", "marian", "text2text-generation", "ase", "de", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ase-de
11
null
transformers
10,926
--- tags: - translation license: apache-2.0 --- ### opus-mt-ase-de * source languages: ase * target languages: de * OPUS readme: [ase-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.de | 27.2 | 0.478 |
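The opus-mt cards in this dump all share this structure; a generic MarianMT loading sketch, shown for this pair (the source sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-ase-de"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Translate a batch of source sentences (placeholder input).
batch = tokenizer(["source sentence here"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

The same pattern applies to the other Helsinki-NLP/opus-mt-* records below, with only the model id changed.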
Helsinki-NLP/opus-mt-bnt-en
daa6f431ef85c1d5923fd6a7e3bcf85dc0ea1dc2
2021-01-18T07:52:00.000Z
[ "pytorch", "marian", "text2text-generation", "sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-bnt-en
11
null
transformers
10,927
--- language: - sn - zu - rw - lg - ts - ln - ny - xh - rn - bnt - en tags: - translation license: apache-2.0 --- ### bnt-eng * source group: Bantu languages * target group: English * OPUS readme: [bnt-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md) * model: transformer * source language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kin-eng.kin.eng | 31.7 | 0.481 | | Tatoeba-test.lin-eng.lin.eng | 8.3 | 0.271 | | Tatoeba-test.lug-eng.lug.eng | 5.3 | 0.128 | | Tatoeba-test.multi.eng | 23.1 | 0.394 | | Tatoeba-test.nya-eng.nya.eng | 38.3 | 0.527 | | Tatoeba-test.run-eng.run.eng | 26.6 | 0.431 | | Tatoeba-test.sna-eng.sna.eng | 27.5 | 0.440 | | Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.195 | | Tatoeba-test.toi-eng.toi.eng | 16.2 | 0.342 | | Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 | | Tatoeba-test.umb-eng.umb.eng | 8.4 | 0.231 | | Tatoeba-test.xho-eng.xho.eng | 37.2 | 0.554 | | Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.576 | ### System Info: - hf_name: bnt-eng - source_languages: bnt - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en'] - src_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt - src_alpha3: bnt - tgt_alpha3: eng - short_pair: bnt-en - chrF2_score: 0.39399999999999996 - bleu: 23.1 - brevity_penalty: 1.0 - ref_len: 14565.0 - src_name: Bantu languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: bnt - tgt_alpha2: en - prefer_old: False - long_pair: bnt-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ceb-fi
8c5cdaa45a8ef959061c6d97a7f118e2714725bc
2021-09-09T21:28:30.000Z
[ "pytorch", "marian", "text2text-generation", "ceb", "fi", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ceb-fi
11
null
transformers
10,928
--- tags: - translation license: apache-2.0 --- ### opus-mt-ceb-fi * source languages: ceb * target languages: fi * OPUS readme: [ceb-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ceb.fi | 27.4 | 0.525 |
Helsinki-NLP/opus-mt-cs-uk
358d8385f3eaf83363f2daf7ac81b21a7c9f827a
2021-01-18T07:56:10.000Z
[ "pytorch", "marian", "text2text-generation", "cs", "uk", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-cs-uk
11
null
transformers
10,929
--- language: - cs - uk tags: - translation license: apache-2.0 --- ### ces-ukr * source group: Czech * target group: Ukrainian * OPUS readme: [ces-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-ukr/README.md) * model: transformer-align * source language(s): ces * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ces.ukr | 50.9 | 0.680 | ### System Info: - hf_name: ces-ukr - source_languages: ces - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ces-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['cs', 'uk'] - src_constituents: {'ces'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ces-ukr/opus-2020-06-17.test.txt - src_alpha3: ces - tgt_alpha3: ukr - short_pair: cs-uk - chrF2_score: 0.68 - bleu: 50.9 - brevity_penalty: 0.9940000000000001 - ref_len: 8891.0 - src_name: Czech - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: cs - tgt_alpha2: uk - prefer_old: False - long_pair: ces-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-de-crs
b9de144126655b973cd8cf74a5651ac999e551a2
2021-09-09T21:30:25.000Z
[ "pytorch", "marian", "text2text-generation", "de", "crs", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-de-crs
11
null
transformers
10,930
--- tags: - translation license: apache-2.0 --- ### opus-mt-de-crs * source languages: de * target languages: crs * OPUS readme: [de-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-crs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-crs/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-crs/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-crs/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.crs | 24.1 | 0.429 |
Helsinki-NLP/opus-mt-de-fj
596580a8225fb340357d25cd38639fed5d662681
2021-09-09T21:31:09.000Z
[ "pytorch", "marian", "text2text-generation", "de", "fj", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-de-fj
11
null
transformers
10,931
--- tags: - translation license: apache-2.0 --- ### opus-mt-de-fj * source languages: de * target languages: fj * OPUS readme: [de-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fj/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.fj | 24.6 | 0.470 |
Helsinki-NLP/opus-mt-el-fr
b00ba91c42b2f20768228b179f01274048158001
2021-09-09T21:33:51.000Z
[ "pytorch", "marian", "text2text-generation", "el", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-el-fr
11
null
transformers
10,932
--- tags: - translation license: apache-2.0 --- ### opus-mt-el-fr * source languages: el * target languages: fr * OPUS readme: [el-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.el.fr | 63.0 | 0.741 |
Helsinki-NLP/opus-mt-en-bcl
fdda7e146d903da0f4da8895800c52bdcfa07ecc
2021-09-09T21:34:09.000Z
[ "pytorch", "marian", "text2text-generation", "en", "bcl", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-bcl
11
null
transformers
10,933
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-bcl * source languages: en * target languages: bcl * OPUS readme: [en-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bcl/README.md) * dataset: opus+bt * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.zip) * test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.test.txt) * test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.bcl | 54.3 | 0.722 |
Helsinki-NLP/opus-mt-en-ho
12bad640564ae34b349fb0ac28a52995c7e17c2d
2021-09-09T21:35:57.000Z
[ "pytorch", "marian", "text2text-generation", "en", "ho", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-ho
11
null
transformers
10,934
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-ho * source languages: en * target languages: ho * OPUS readme: [en-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ho/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ho | 33.9 | 0.563 |
Helsinki-NLP/opus-mt-en-kqn
d3adf1c5424a0a362c66279729717f57d76b027e
2021-09-09T21:36:41.000Z
[ "pytorch", "marian", "text2text-generation", "en", "kqn", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-kqn
11
null
transformers
10,935
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-kqn * source languages: en * target languages: kqn * OPUS readme: [en-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kqn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kqn | 33.1 | 0.567 |
Helsinki-NLP/opus-mt-en-lua
0164f9af18272b0b05a777f33f0f822fa09af417
2021-09-09T21:37:07.000Z
[ "pytorch", "marian", "text2text-generation", "en", "lua", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-lua
11
null
transformers
10,936
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-lua

* source languages: en
* target languages: lua
* OPUS readme: [en-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lua | 35.3 | 0.578 |
Helsinki-NLP/opus-mt-en-mfe
dc7d5d1502df1a435d053192fcc0dcfae16f76a5
2021-09-09T21:37:27.000Z
[ "pytorch", "marian", "text2text-generation", "en", "mfe", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-mfe
11
null
transformers
10,937
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-mfe

* source languages: en
* target languages: mfe
* OPUS readme: [en-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mfe | 32.1 | 0.509 |
Helsinki-NLP/opus-mt-en-mos
302e35c3f1fe631bb0bac15243a8770f6362b7ef
2021-09-09T21:37:47.000Z
[ "pytorch", "marian", "text2text-generation", "en", "mos", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-mos
11
null
transformers
10,938
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-mos

* source languages: en
* target languages: mos
* OPUS readme: [en-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mos | 26.9 | 0.417 |
Helsinki-NLP/opus-mt-en-nyk
2efca81ef2453401aaa06cafe04aa00db56e6eb5
2021-09-09T21:38:17.000Z
[ "pytorch", "marian", "text2text-generation", "en", "nyk", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-nyk
11
null
transformers
10,939
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-nyk

* source languages: en
* target languages: nyk
* OPUS readme: [en-nyk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nyk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.nyk | 26.6 | 0.511 |
Helsinki-NLP/opus-mt-en-pis
84d726e58202d97cfa040467e691cb532aee4000
2021-09-09T21:38:33.000Z
[ "pytorch", "marian", "text2text-generation", "en", "pis", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-pis
11
null
transformers
10,940
---
tags:
- translation
license: apache-2.0
---

### opus-mt-en-pis

* source languages: en
* target languages: pis
* OPUS readme: [en-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pis | 38.3 | 0.571 |
Helsinki-NLP/opus-mt-es-guw
a88acc7826825c4732675ed37998fee12b34754c
2021-09-09T21:42:38.000Z
[ "pytorch", "marian", "text2text-generation", "es", "guw", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-es-guw
11
null
transformers
10,941
---
tags:
- translation
license: apache-2.0
---

### opus-mt-es-guw

* source languages: es
* target languages: guw
* OPUS readme: [es-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.guw | 28.6 | 0.480 |
Helsinki-NLP/opus-mt-es-st
7b5626bbf76ca489f6a248e7dacdda7f2caa73a9
2021-09-09T21:44:53.000Z
[ "pytorch", "marian", "text2text-generation", "es", "st", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-es-st
11
null
transformers
10,942
---
tags:
- translation
license: apache-2.0
---

### opus-mt-es-st

* source languages: es
* target languages: st
* OPUS readme: [es-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-st/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-st/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-st/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.st | 35.5 | 0.556 |
Helsinki-NLP/opus-mt-es-tn
c7ceaa541f5dd1ec57c33e543625c3a201d75d72
2021-09-09T21:45:04.000Z
[ "pytorch", "marian", "text2text-generation", "es", "tn", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-es-tn
11
null
transformers
10,943
---
tags:
- translation
license: apache-2.0
---

### opus-mt-es-tn

* source languages: es
* target languages: tn
* OPUS readme: [es-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tn | 32.2 | 0.528 |
Helsinki-NLP/opus-mt-es-to
1ed93f73b0b2c780c8ab4e1d3495e84ac5bf6886
2021-09-09T21:45:08.000Z
[ "pytorch", "marian", "text2text-generation", "es", "to", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-es-to
11
null
transformers
10,944
---
tags:
- translation
license: apache-2.0
---

### opus-mt-es-to

* source languages: es
* target languages: to
* OPUS readme: [es-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-to/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-to/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-to/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.to | 35.7 | 0.510 |
Helsinki-NLP/opus-mt-es-tw
2b7493bf5c0b2d63dd5043253f11893748c48fdd
2021-09-09T21:45:19.000Z
[ "pytorch", "marian", "text2text-generation", "es", "tw", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-es-tw
11
null
transformers
10,945
---
tags:
- translation
license: apache-2.0
---

### opus-mt-es-tw

* source languages: es
* target languages: tw
* OPUS readme: [es-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tw/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tw | 26.3 | 0.465 |
Helsinki-NLP/opus-mt-fi-es
1d81789f89a9ada6c9a4b1cacd43bc6faab326a9
2021-09-09T21:47:24.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "es", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-es
11
null
transformers
10,946
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-es

* source languages: fi
* target languages: es
* OPUS readme: [fi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-es/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-es/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-es/opus-2020-04-12.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.es | 51.5 | 0.700 |
Helsinki-NLP/opus-mt-fi-ig
4f565e8da888286ad8d6c9ee976bfa402f5b1e45
2021-09-09T21:48:32.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "ig", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-ig
11
null
transformers
10,947
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-ig

* source languages: fi
* target languages: ig
* OPUS readme: [fi-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ig/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ig/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ig/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.ig | 28.5 | 0.456 |
Helsinki-NLP/opus-mt-fi-iso
e8ab4b0929ba118babb107935e74ae71f7d8ea36
2021-09-09T21:48:44.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "iso", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-iso
11
null
transformers
10,948
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-iso

* source languages: fi
* target languages: iso
* OPUS readme: [fi-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-iso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-iso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-iso/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.iso | 26.0 | 0.439 |
Helsinki-NLP/opus-mt-fi-mh
1ee9917dfe5dcb800f1cb71a9494fd5028007a3e
2021-09-09T21:49:35.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "mh", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-mh
11
null
transformers
10,949
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-mh

* source languages: fi
* target languages: mh
* OPUS readme: [fi-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-mh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-mh/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mh/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mh/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.mh | 20.8 | 0.404 |
Helsinki-NLP/opus-mt-fi-niu
89393797459f828fd2ca0a511409ab580a580f84
2021-09-09T21:49:51.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "niu", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-niu
11
null
transformers
10,950
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-niu

* source languages: fi
* target languages: niu
* OPUS readme: [fi-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.niu | 35.3 | 0.565 |
Helsinki-NLP/opus-mt-fi-pap
4ead35b0f9dc656671fa9837c8274a0373e0c48f
2021-09-09T21:50:09.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "pap", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-pap
11
null
transformers
10,951
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-pap

* source languages: fi
* target languages: pap
* OPUS readme: [fi-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pap/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pap/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pap/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.pap | 27.3 | 0.478 |
Helsinki-NLP/opus-mt-fi-sg
3d3ff9f491d8e9a7362eb30ca278d6e409d3f586
2021-09-09T21:50:35.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "sg", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-sg
11
null
transformers
10,952
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-sg

* source languages: fi
* target languages: sg
* OPUS readme: [fi-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.sg | 29.3 | 0.480 |
Helsinki-NLP/opus-mt-fi-wls
9a78ab2df267ffa3fd7fc634f8f5ea117fb22a86
2021-09-09T21:52:13.000Z
[ "pytorch", "marian", "text2text-generation", "fi", "wls", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fi-wls
11
null
transformers
10,953
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fi-wls

* source languages: fi
* target languages: wls
* OPUS readme: [fi-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.wls | 24.7 | 0.466 |
Helsinki-NLP/opus-mt-fr-ase
710d58cad603c9fbb4cad06f79152dc0e5f0243d
2021-09-09T21:52:48.000Z
[ "pytorch", "marian", "text2text-generation", "fr", "ase", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fr-ase
11
null
transformers
10,954
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fr-ase

* source languages: fr
* target languages: ase
* OPUS readme: [fr-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ase | 38.5 | 0.545 |
Helsinki-NLP/opus-mt-fr-pon
1fd95877d97a9b9a5a31c17dc1901f9b275bb184
2021-09-09T21:56:15.000Z
[ "pytorch", "marian", "text2text-generation", "fr", "pon", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fr-pon
11
null
transformers
10,955
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fr-pon

* source languages: fr
* target languages: pon
* OPUS readme: [fr-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pon | 23.9 | 0.458 |
Helsinki-NLP/opus-mt-fr-sn
d6affd8f83d3a7ddf349cdda4947c666ba110d4a
2021-09-09T21:56:53.000Z
[ "pytorch", "marian", "text2text-generation", "fr", "sn", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fr-sn
11
null
transformers
10,956
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fr-sn

* source languages: fr
* target languages: sn
* OPUS readme: [fr-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sn | 23.4 | 0.507 |
Helsinki-NLP/opus-mt-fr-sv
d1c07247b8c983426342076b3bd3e29776d7723b
2021-09-09T21:57:06.000Z
[ "pytorch", "marian", "text2text-generation", "fr", "sv", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fr-sv
11
null
transformers
10,957
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fr-sv

* source languages: fr
* target languages: sv
* OPUS readme: [fr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.sv | 60.1 | 0.744 |
Helsinki-NLP/opus-mt-fr-tll
364625c4114eb5592a0b94a948b573d3eda9a71f
2021-09-09T21:57:19.000Z
[ "pytorch", "marian", "text2text-generation", "fr", "tll", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fr-tll
11
null
transformers
10,958
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fr-tll

* source languages: fr
* target languages: tll
* OPUS readme: [fr-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tll | 24.6 | 0.467 |
Helsinki-NLP/opus-mt-fr-uk
e7b16437d9bb57b6510636de109b9c9ef9e2088a
2021-09-09T21:57:59.000Z
[ "pytorch", "marian", "text2text-generation", "fr", "uk", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fr-uk
11
null
transformers
10,959
---
tags:
- translation
license: apache-2.0
---

### opus-mt-fr-uk

* source languages: fr
* target languages: uk
* OPUS readme: [fr-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.uk | 39.4 | 0.581 |
Helsinki-NLP/opus-mt-hu-sv
1153a336b2d0ba262e85298f73c8e906879cbb6e
2021-09-09T22:11:03.000Z
[ "pytorch", "marian", "text2text-generation", "hu", "sv", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-hu-sv
11
null
transformers
10,960
---
tags:
- translation
license: apache-2.0
---

### opus-mt-hu-sv

* source languages: hu
* target languages: sv
* OPUS readme: [hu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-sv/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-sv/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-sv/opus-2020-01-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.sv | 52.6 | 0.686 |
Helsinki-NLP/opus-mt-inc-inc
e1be60fc72658b90bc708254047be2bb5518abab
2020-08-21T14:42:46.000Z
[ "pytorch", "marian", "text2text-generation", "bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-inc-inc
11
null
transformers
10,961
---
language:
- bn
- or
- gu
- mr
- ur
- hi
- as
- si
- inc
tags:
- translation
license: apache-2.0
---

### inc-inc

* source group: Indic languages
* target group: Indic languages
* OPUS readme: [inc-inc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/inc-inc/README.md)
* model: transformer
* source language(s): asm hin mar urd
* target language(s): asm hin mar urd
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.asm-hin.asm.hin | 2.6 | 0.231 |
| Tatoeba-test.hin-asm.hin.asm | 9.1 | 0.262 |
| Tatoeba-test.hin-mar.hin.mar | 28.1 | 0.548 |
| Tatoeba-test.hin-urd.hin.urd | 19.9 | 0.508 |
| Tatoeba-test.mar-hin.mar.hin | 11.6 | 0.466 |
| Tatoeba-test.multi.multi | 17.1 | 0.464 |
| Tatoeba-test.urd-hin.urd.hin | 13.5 | 0.377 |

### System Info:
- hf_name: inc-inc
- source_languages: inc
- target_languages: inc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/inc-inc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.test.txt
- src_alpha3: inc
- tgt_alpha3: inc
- short_pair: inc-inc
- chrF2_score: 0.46399999999999997
- bleu: 17.1
- brevity_penalty: 1.0
- ref_len: 4985.0
- src_name: Indic languages
- tgt_name: Indic languages
- train_date: 2020-07-27
- src_alpha2: inc
- tgt_alpha2: inc
- prefer_old: False
- long_pair: inc-inc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
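Because `opus-mt-inc-inc` is multilingual on both sides, the card above requires a sentence-initial `>>id<<` token naming the target language. A minimal sketch of what that looks like in practice, assuming `>>hin<<` (Hindi) is a valid target ID per the card's target list; the Marathi input sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-inc-inc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>hin<< prefix steers the multilingual model toward Hindi output;
# the text after it should be in one of the supported source languages.
src_text = [">>hin<< माझे नाव ओम आहे."]  # illustrative Marathi sentence
batch = tokenizer(src_text, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```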
Helsinki-NLP/opus-mt-it-ca
be40b8f92044c88e5b45840eb706c3196a9da037
2020-08-21T14:42:46.000Z
[ "pytorch", "marian", "text2text-generation", "it", "ca", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-it-ca
11
null
transformers
10,962
---
language:
- it
- ca
tags:
- translation
license: apache-2.0
---

### ita-cat

* source group: Italian
* target group: Catalan
* OPUS readme: [ita-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-cat/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.cat | 52.5 | 0.706 |

### System Info:
- hf_name: ita-cat
- source_languages: ita
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca']
- src_constituents: {'ita'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.test.txt
- src_alpha3: ita
- tgt_alpha3: cat
- short_pair: it-ca
- chrF2_score: 0.706
- bleu: 52.5
- brevity_penalty: 0.993
- ref_len: 2074.0
- src_name: Italian
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: it
- tgt_alpha2: ca
- prefer_old: False
- long_pair: ita-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ja-fi
7f6dd2d58f7cf578f745e5377569ff9a495651ba
2021-09-10T13:53:20.000Z
[ "pytorch", "marian", "text2text-generation", "ja", "fi", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ja-fi
11
null
transformers
10,963
---
tags:
- translation
license: apache-2.0
---

### opus-mt-ja-fi

* source languages: ja
* target languages: fi
* OPUS readme: [ja-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fi/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ja.fi | 21.2 | 0.448 |
Helsinki-NLP/opus-mt-ja-hu
f75caa085f156b74c435bf097ff363a8bf2ef375
2020-08-21T14:42:47.000Z
[ "pytorch", "marian", "text2text-generation", "ja", "hu", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ja-hu
11
null
transformers
10,964
---
language:
- ja
- hu
tags:
- translation
license: apache-2.0
---

### jpn-hun

* source group: Japanese
* target group: Hungarian
* OPUS readme: [jpn-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hun/README.md)
* model: transformer-align
* source language(s): jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Yiii
* target language(s): hun
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.hun | 12.2 | 0.364 |

### System Info:
- hf_name: jpn-hun
- source_languages: jpn
- target_languages: hun
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hun/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'hu']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'hun'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: hun
- short_pair: ja-hu
- chrF2_score: 0.364
- bleu: 12.2
- brevity_penalty: 1.0
- ref_len: 18625.0
- src_name: Japanese
- tgt_name: Hungarian
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: hu
- prefer_old: False
- long_pair: jpn-hun
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ka-ru
549d06a80ecb3c9203d7ecf8eee396daf439daaf
2020-08-21T14:42:47.000Z
[ "pytorch", "marian", "text2text-generation", "ka", "ru", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ka-ru
11
null
transformers
10,965
---
language:
- ka
- ru
tags:
- translation
license: apache-2.0
---

### kat-rus

* source group: Georgian
* target group: Russian
* OPUS readme: [kat-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-rus/README.md)
* model: transformer-align
* source language(s): kat
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kat.rus | 38.2 | 0.604 |

### System Info:
- hf_name: kat-rus
- source_languages: kat
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ka', 'ru']
- src_constituents: {'kat'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.test.txt
- src_alpha3: kat
- tgt_alpha3: rus
- short_pair: ka-ru
- chrF2_score: 0.604
- bleu: 38.2
- brevity_penalty: 0.996
- ref_len: 3899.0
- src_name: Georgian
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: ka
- tgt_alpha2: ru
- prefer_old: False
- long_pair: kat-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ms-fr
7b7ca29930c9f9b9300a92cdd84e175fbada4865
2020-08-21T14:42:48.000Z
[ "pytorch", "marian", "text2text-generation", "ms", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ms-fr
11
null
transformers
10,966
---
language:
- ms
- fr
tags:
- translation
license: apache-2.0
---

### msa-fra

* source group: Malay (macrolanguage)
* target group: French
* OPUS readme: [msa-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-fra/README.md)
* model: transformer-align
* source language(s): ind zsm_Latn
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.fra | 43.7 | 0.609 |

### System Info:
- hf_name: msa-fra
- source_languages: msa
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms', 'fr']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: fra
- short_pair: ms-fr
- chrF2_score: 0.609
- bleu: 43.7
- brevity_penalty: 0.9740000000000001
- ref_len: 7808.0
- src_name: Malay (macrolanguage)
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: fr
- prefer_old: False
- long_pair: msa-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-no-sv
af2f7ad669d1a816ec723ae84376b4ffd2af8c34
2020-08-21T14:42:48.000Z
[ "pytorch", "marian", "text2text-generation", "no", "sv", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-no-sv
11
null
transformers
10,967
---
language:
- no
- sv
tags:
- translation
license: apache-2.0
---

### nor-swe

* source group: Norwegian
* target group: Swedish
* OPUS readme: [nor-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.swe | 63.7 | 0.773 |

### System Info:
- hf_name: nor-swe
- source_languages: nor
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'sv']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: swe
- short_pair: no-sv
- chrF2_score: 0.773
- bleu: 63.7
- brevity_penalty: 0.9670000000000001
- ref_len: 3672.0
- src_name: Norwegian
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: sv
- prefer_old: False
- long_pair: nor-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ro-sv
7fa2ef5f82b826ec17683cd65d864ffc52d2f9be
2021-09-10T14:02:14.000Z
[ "pytorch", "marian", "text2text-generation", "ro", "sv", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-ro-sv
11
null
transformers
10,968
---
tags:
- translation
license: apache-2.0
---

### opus-mt-ro-sv

* source languages: ro
* target languages: sv
* OPUS readme: [ro-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.sv | 31.2 | 0.529 |
Helsinki-NLP/opus-mt-sl-sv
3d0cfc54aed676928cca2594bb17d33960e1501b
2021-09-10T14:03:50.000Z
[ "pytorch", "marian", "text2text-generation", "sl", "sv", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sl-sv
11
null
transformers
10,969
---
tags:
- translation
license: apache-2.0
---

### opus-mt-sl-sv

* source languages: sl
* target languages: sv
* OPUS readme: [sl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.sv | 27.8 | 0.509 |
Helsinki-NLP/opus-mt-sn-es
e8445e4039e78d8a9a27ddcca2342717c4ef57e6
2021-09-10T14:04:08.000Z
[ "pytorch", "marian", "text2text-generation", "sn", "es", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sn-es
11
null
transformers
10,970
---
tags:
- translation
license: apache-2.0
---

### opus-mt-sn-es

* source languages: sn
* target languages: es
* OPUS readme: [sn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.es | 32.5 | 0.509 |
Helsinki-NLP/opus-mt-sq-es
201693b9a3cb4e58e89cc08b2f3cd0179eb5c4c6
2021-09-10T14:04:23.000Z
[ "pytorch", "marian", "text2text-generation", "sq", "es", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sq-es
11
null
transformers
10,971
---
tags:
- translation
license: apache-2.0
---

### opus-mt-sq-es

* source languages: sq
* target languages: es
* OPUS readme: [sq-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.sq.es | 23.9 | 0.510 |
Helsinki-NLP/opus-mt-sq-sv
71b15251bc2be502a4d0d14d68ba32caf8bceeb0
2021-09-10T14:04:27.000Z
[ "pytorch", "marian", "text2text-generation", "sq", "sv", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sq-sv
11
null
transformers
10,972
---
tags:
- translation
license: apache-2.0
---

### opus-mt-sq-sv

* source languages: sq
* target languages: sv
* OPUS readme: [sq-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sq.sv | 36.2 | 0.559 |
Helsinki-NLP/opus-mt-st-es
d09b48ce08d1187675cac6ecf8146e04876b6111
2021-09-10T14:04:58.000Z
[ "pytorch", "marian", "text2text-generation", "st", "es", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-st-es
11
null
transformers
10,973
---
tags:
- translation
license: apache-2.0
---

### opus-mt-st-es

* source languages: st
* target languages: es
* OPUS readme: [st-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.es | 31.3 | 0.499 |
Helsinki-NLP/opus-mt-st-fi
c4ad55e29b075da7f480d7d4c5d7a4531ea70561
2021-09-10T14:05:01.000Z
[ "pytorch", "marian", "text2text-generation", "st", "fi", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-st-fi
11
null
transformers
10,974
---
tags:
- translation
license: apache-2.0
---

### opus-mt-st-fi

* source languages: st
* target languages: fi
* OPUS readme: [st-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.fi | 28.8 | 0.520 |
Helsinki-NLP/opus-mt-sv-is
2a99829544782cb3a27594c4121ef5a049a8b1f8
2021-09-10T14:07:27.000Z
[ "pytorch", "marian", "text2text-generation", "sv", "is", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sv-is
11
null
transformers
10,975
---
tags:
- translation
license: apache-2.0
---

### opus-mt-sv-is

* source languages: sv
* target languages: is
* OPUS readme: [sv-is](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-is/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-is/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-is/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-is/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.is | 27.1 | 0.471 |
Helsinki-NLP/opus-mt-sv-toi
6d08c2ea49e26002ae43827819cfef1e8130fa08
2021-09-10T14:10:07.000Z
[ "pytorch", "marian", "text2text-generation", "sv", "toi", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sv-toi
11
null
transformers
10,976
---
tags:
- translation
license: apache-2.0
---

### opus-mt-sv-toi

* source languages: sv
* target languages: toi
* OPUS readme: [sv-toi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-toi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-toi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-toi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-toi/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.toi | 23.2 | 0.512 |
Helsinki-NLP/opus-mt-sv-tum
e334319a27a17e358b9002c74a3ddac826d8206a
2021-09-10T14:10:18.000Z
[ "pytorch", "marian", "text2text-generation", "sv", "tum", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sv-tum
11
null
transformers
10,977
---
tags:
- translation
license: apache-2.0
---

### opus-mt-sv-tum

* source languages: sv
* target languages: tum
* OPUS readme: [sv-tum](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tum/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tum/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tum/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tum/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tum | 22.0 | 0.475 |
Helsinki-NLP/opus-mt-to-fr
eb0c363903adb05d20c510e2bae9761310c9d09f
2021-09-11T10:49:00.000Z
[ "pytorch", "marian", "text2text-generation", "to", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-to-fr
11
null
transformers
10,978
---
tags:
- translation
license: apache-2.0
---

### opus-mt-to-fr

* source languages: to
* target languages: fr
* OPUS readme: [to-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.fr | 27.9 | 0.456 |
Helsinki-NLP/opus-mt-tr-az
07265265c0a05859e0f82e1e04360ccbcbe25fb0
2020-08-21T14:42:51.000Z
[ "pytorch", "marian", "text2text-generation", "tr", "az", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-tr-az
11
1
transformers
10,979
---
language:
- tr
- az
tags:
- translation
license: apache-2.0
---

### tur-aze

* source group: Turkish
* target group: Azerbaijani
* OPUS readme: [tur-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): aze_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.aze | 27.7 | 0.551 |

### System Info:
- hf_name: tur-aze
- source_languages: tur
- target_languages: aze
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'az']
- src_constituents: {'tur'}
- tgt_constituents: {'aze_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt
- src_alpha3: tur
- tgt_alpha3: aze
- short_pair: tr-az
- chrF2_score: 0.551
- bleu: 27.7
- brevity_penalty: 1.0
- ref_len: 5436.0
- src_name: Turkish
- tgt_name: Azerbaijani
- train_date: 2020-06-16
- src_alpha2: tr
- tgt_alpha2: az
- prefer_old: False
- long_pair: tur-aze
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-uk-fi
61cc07a5eda3c70d48fc7834a8b9d8713a7806ed
2021-09-11T10:51:22.000Z
[ "pytorch", "marian", "text2text-generation", "uk", "fi", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-uk-fi
11
null
transformers
10,980
--- tags: - translation license: apache-2.0 --- ### opus-mt-uk-fi * source languages: uk * target languages: fi * OPUS readme: [uk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.uk.fi | 24.4 | 0.490 |
Helsinki-NLP/opus-mt-wls-fr
5880ec6be460a23a0793a0db2bff5cf8ce649bc8
2021-09-11T10:52:13.000Z
[ "pytorch", "marian", "text2text-generation", "wls", "fr", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-wls-fr
11
null
transformers
10,981
--- tags: - translation license: apache-2.0 --- ### opus-mt-wls-fr * source languages: wls * target languages: fr * OPUS readme: [wls-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.wls.fr | 22.6 | 0.389 |
LysandreJik/local_dir
a1dc5a26b81c407302cab46144d78fa6a3048e80
2021-09-09T15:51:28.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
LysandreJik
null
LysandreJik/local_dir
11
null
transformers
10,982
Entry not found
Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
65b3b51f511e1b215984ea641b2959ac8d8c774a
2021-11-30T14:17:19.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:adecorpusv2", "transformers", "sagemaker", "bert-base-uncased", "text classification", "license:apache-2.0", "model-index" ]
text-classification
false
Jorgeutd
null
Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
11
null
transformers
10,983
---
language: en
widget:
- text: "I got a rash from taking acetaminophen"
tags:
- sagemaker
- bert-base-uncased
- text classification
license: apache-2.0
datasets:
- adecorpusv2
model-index:
- name: BERT-ade_corpus
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: "ade_corpus_v2Ade_corpus_v2_classification"
      type: ade_corpus
    metrics:
    - name: Validation Accuracy
      type: accuracy
      value: 92.98
    - name: Validation F1
      type: f1
      value: 82.73
---

## bert-base-uncased

This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.

- Problem type: Text Classification (adverse drug effect detection).

## Hyperparameters

```json
{
  "do_eval": true,
  "do_train": true,
  "fp16": true,
  "load_best_model_at_end": true,
  "model_name": "bert-base-uncased",
  "num_train_epochs": 10,
  "per_device_eval_batch_size": 16,
  "per_device_train_batch_size": 16,
  "learning_rate": 5e-5
}
```

## Validation Metrics

| key | value |
| --- | ----- |
| eval_accuracy | 0.9298021697511167 |
| eval_auc | 0.8902672664394546 |
| eval_f1 | 0.827315541601256 |
| eval_loss | 0.17835010588169098 |
| eval_recall | 0.8234375 |
| eval_precision | 0.831230283911672 |

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I got a rash from taking acetaminophen"}' https://api-inference.huggingface.co/models/Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
```
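For local inference without the hosted API, a minimal `transformers` pipeline sketch can be used; the model id is taken from this card, and the returned label names are whatever the checkpoint's config defines:

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned ADE detection model.
classifier = pipeline(
    "text-classification",
    model="Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2",
)

print(classifier("I got a rash from taking acetaminophen"))
# -> [{'label': ..., 'score': ...}] with labels as defined in the model config
```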
KBLab/electra-small-swedish-cased-generator
8a4906f83401fe2b1f454aa855ec85a30df61e6b
2020-10-21T08:17:40.000Z
[ "pytorch", "tf", "electra", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
KBLab
null
KBLab/electra-small-swedish-cased-generator
11
null
transformers
10,984
Entry not found
LeBenchmark/wav2vec2-FR-1K-large
f766b6c8eb44300bc4a66d4896ef782416912d1f
2021-11-30T04:21:31.000Z
[ "pytorch", "jax", "wav2vec2", "feature-extraction", "fr", "transformers", "license:apache-2.0" ]
feature-extraction
false
LeBenchmark
null
LeBenchmark/wav2vec2-FR-1K-large
11
null
transformers
10,985
--- language: "fr" thumbnail: tags: - wav2vec2 license: "apache-2.0" --- # LeBenchmark: wav2vec2 large model trained on 1K hours of French speech LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark](https://openreview.net/pdf?id=TSvj5dmuSd) ## Model and data descriptions We release four different models that can be found under our HuggingFace organization. Two different wav2vec2 architectures *Base* and *Large* are coupled with our small (1K), medium (3K), and large (7K) corpus. A larger one should come later. In short: - [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown). - [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown). - [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown). - [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown). - [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**). - [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females). - [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females). ## Intended uses & limitations Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced. ## Fine-tune with Fairseq for ASR with CTC As our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english). Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part. ## Integrate to SpeechBrain for ASR, Speaker, Source Separation ... Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies. While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models! 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ... 2. 
*Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer. **If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)** ## Referencing LeBenchmark ``` @article{Evain2021LeBenchmarkAR, title={LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech}, author={Sol{\`e}ne Evain and Ha Nguyen and Hang Le and Marcely Zanon Boito and Salima Mdhaffar and Sina Alisamir and Ziyi Tong and N. Tomashenko and Marco Dinarelli and Titouan Parcollet and A. Allauzen and Y. Est{\`e}ve and B. Lecouteux and F. Portet and S. Rossato and F. Ringeval and D. Schwab and L. Besacier}, journal={ArXiv}, year={2021}, volume={abs/2104.11462} } ```
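As referenced in option 1 above, a minimal frozen feature-extraction sketch with plain `transformers`, assuming the checkpoint ships a standard preprocessor config (if it does not, a generic `Wav2Vec2FeatureExtractor` with 16 kHz settings can be constructed manually):

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "LeBenchmark/wav2vec2-FR-1K-large"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name).eval()

# One second of zeros as stand-in 16 kHz audio; substitute real French speech.
speech = torch.zeros(16000)
inputs = extractor(speech.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)

print(features.shape)
```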
Maaly/host
18da8d2a8d19bf16e8b7cbe1463c637a0cbc3639
2022-05-28T15:33:10.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Maaly
null
Maaly/host
11
null
transformers
10,986
The host model is a Named Entity Recognition (NER) model that identifies and annotates the host (living organism) of microbiome samples in texts. It is a fine-tuned BioBERT model, and the training dataset is available at https://gitlab.com/maaly7/emerald_metagenomics_annotations

Testing examples:

1. Turkestan cockroach nymphs (Finke, 2013) were fed to the treefrogs at a quantity of 10% of treefrog biomass twice a week.
2. Samples were collected from clinically healthy giant pandas (five females and four males) at the China Conservation and Research Center for Giant Pandas (Ya'an, China).
3. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
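A minimal sketch for running the model on the testing examples above via the standard token-classification pipeline; the entity label names are whatever the checkpoint's config defines:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Maaly/host",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)

text = ("Samples were collected from clinically healthy giant pandas "
        "(five females and four males) at the China Conservation and "
        "Research Center for Giant Pandas (Ya'an, China).")

for entity in ner(text):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```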
Marc/pegasus_xsum_gigaword
e7c9ae792ba42b2a6735c85472a986348d8b4e78
2021-03-26T22:49:11.000Z
[ "pytorch", "pegasus", "text2text-generation", "dataset:XSUM", "dataset:Gigaword", "transformers", "autotrain_compatible" ]
text2text-generation
false
Marc
null
Marc/pegasus_xsum_gigaword
11
null
transformers
10,987
---
language: en
datasets:
- XSUM
- Gigaword
metrics:
- rouge
---

# Pegasus XSUM Gigaword

## Model description

A Pegasus-XSUM model fine-tuned on the Gigaword summarization task. It performs significantly better than pegasus-gigaword, but still does not match the performance reported in the Pegasus paper.

## Intended uses & limitations

Produces short summaries with the coherence of the XSUM model.

#### How to use

A minimal usage sketch is given at the end of this card.

#### Limitations and bias

Still has all the biases of any of the abstractive models, but seems a little less prone to hallucination.

## Training data

Initialized with pegasus-XSUM.

## Training procedure

Trained for 11,500 iterations on the Gigaword corpus using the out-of-the-box Hugging Face seq2seq trainer with default parameters.

## Eval results

Evaluated on the Gigaword test set (using the Hugging Face script with default parameters):

run_summarization.py --model_name_or_path pegasus-xsum/checkpoint-11500/ --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate

| Metric | Score |
| ----------- | ----------- |
| eval_rouge1 | 34.1958 |
| eval_rouge2 | 15.4033 |
| eval_rougeL | 31.4488 |

run_summarization.py --model_name_or_path google/pegasus-gigaword --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate

| Metric | Score |
| ----------- | ----------- |
| eval_rouge1 | 20.8111 |
| eval_rouge2 | 8.766 |
| eval_rougeL | 18.4431 |
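The usage sketch promised under "How to use" above, assuming the checkpoint loads with the standard Pegasus classes (the sample article is illustrative):

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "Marc/pegasus_xsum_gigaword"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

article = (
    "The quarterly report showed revenue rising ten percent year over year, "
    "driven mostly by growth in the cloud division."
)
batch = tokenizer(article, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```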
MoritzLaurer/MiniLM-L6-mnli
6e0917f1a395b7a6c0f054a56b91c45d8e3af92f
2021-12-13T10:36:43.000Z
[ "pytorch", "bert", "text-classification", "en", "transformers", "zero-shot-classification" ]
text-classification
false
MoritzLaurer
null
MoritzLaurer/MiniLM-L6-mnli
11
null
transformers
10,988
---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I liked the movie. [SEP] The movie was good."
---

# MiniLM-L6-mnli

## Model description

This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset. The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model_name = "MoritzLaurer/MiniLM-L6-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I liked the movie"
hypothesis = "The movie was good."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

[MultiNLI](https://huggingface.co/datasets/multi_nli).

### Training procedure

MiniLM-L6-mnli was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for the learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the (matched) test set from MultiNLI. Accuracy: 0.814

## Limitations and bias

Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.

### BibTeX entry and citation info

If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
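Because the card is tagged `zero-shot-classification`, the same checkpoint also works through the zero-shot pipeline; a minimal sketch in which the candidate labels are illustrative:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/MiniLM-L6-mnli")
print(classifier(
    "The movie was good.",
    candidate_labels=["positive", "negative"],
))
```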
MoseliMotsoehli/zuBERTa
1b62500f041b003632383e96ec790ea6c0d435ce
2021-05-20T12:14:07.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "zu", "transformers", "autotrain_compatible" ]
fill-mask
false
MoseliMotsoehli
null
MoseliMotsoehli/zuBERTa
11
null
transformers
10,989
---
language: zu
---

# zuBERTa

zuBERTa is a RoBERTa-style transformer language model trained on Zulu text.

## Intended uses & limitations

The model can be used for getting embeddings to use on a downstream task such as question answering.

#### How to use

```python
>>> from transformers import pipeline
>>> from transformers import AutoTokenizer, AutoModelWithLMHead

>>> tokenizer = AutoTokenizer.from_pretrained("MoseliMotsoehli/zuBERTa")
>>> model = AutoModelWithLMHead.from_pretrained("MoseliMotsoehli/zuBERTa")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker("Abafika eNkandla bafika sebeholwa <mask> uMpongo kaZingelwayo.")

[
  {
    "sequence": "<s>Abafika eNkandla bafika sebeholwa khona uMpongo kaZingelwayo.</s>",
    "score": 0.050459690392017365,
    "token": 555,
    "token_str": "Ġkhona"
  },
  {
    "sequence": "<s>Abafika eNkandla bafika sebeholwa inkosi uMpongo kaZingelwayo.</s>",
    "score": 0.03668094798922539,
    "token": 2321,
    "token_str": "Ġinkosi"
  },
  {
    "sequence": "<s>Abafika eNkandla bafika sebeholwa ubukhosi uMpongo kaZingelwayo.</s>",
    "score": 0.028774697333574295,
    "token": 5101,
    "token_str": "Ġubukhosi"
  }
]
```

## Training data

1. 30k sentences of text that came from the [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download) of Zulu 2018. These were collected from news articles and creative writings.
2. ~7500 articles of human-generated translations that were scraped from the Zulu [wikipedia](https://zu.wikipedia.org/wiki/Special:AllPages).

### BibTeX entry and citation info

```bibtex
@inproceedings{motsoehli2020zuberta,
  author = {Moseli Motsoehli},
  title = {Towards transformation of Southern African language models through transformers.},
  year = {2020}
}
```
Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit
41063e375f26777b80416ee80e2b619343fabcbd
2022-06-18T20:55:55.000Z
[ "pytorch", "gpt_neo", "feature-extraction", "arxiv:2202.08904", "sentence-transformers", "sentence-similarity" ]
sentence-similarity
false
Muennighoff
null
Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit
11
null
sentence-transformers
10,990
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # SGPT-2.7B-weightedmean-msmarco-specb-bitfit ## Usage For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt ## Evaluation Results For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904 ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 124796 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 7.5e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel (1): Pooling({'word_embedding_dimension': 2560, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors ```bibtex @article{muennighoff2022sgpt, title={SGPT: GPT Sentence Embeddings for Semantic Search}, author={Muennighoff, Niklas}, journal={arXiv preprint arXiv:2202.08904}, year={2022} } ```
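A minimal loading sketch with `sentence-transformers`, matching the architecture printed above; note that the SGPT repository documents extra preprocessing for the `specb` variants (special query/document brackets) that this sketch omits:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit")
embeddings = model.encode([
    "How do I bake bread without a recipe?",
    "Baking bread starts with flour, water, salt and yeast.",
])
print(embeddings.shape)  # (2, 2560), per the pooling config above
```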
NDugar/3epoch-3large
6496d73b67afb79acf989cf4996f218fce9547f1
2021-11-30T17:34:56.000Z
[ "pytorch", "deberta-v2", "text-classification", "en", "arxiv:2006.03654", "transformers", "deberta-v3", "deberta-v2`", "deberta-mnli", "license:mit", "zero-shot-classification" ]
zero-shot-classification
false
NDugar
null
NDugar/3epoch-3large
11
1
transformers
10,991
---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

This is the DeBERTa V2 xxlarge model with 48 layers and a hidden size of 1536. The total number of parameters is 1.5B and it is trained with 160GB of raw data.

### Fine-tuning on NLU tasks

We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.

| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |

--------

#### Notes.

- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.

Run with `Deepspeed`,

```bash
pip install datasets
pip install deepspeed

# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json

export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 256 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 3e-6 \
  --num_train_epochs 3 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 10 \
  --logging_dir $output_dir \
  --deepspeed ds_config.json
```

You can also run with `--sharded_ddp`

```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
  --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```

### Citation

If you find DeBERTa useful for your work, please cite the following paper:

```latex
@inproceedings{he2021deberta,
  title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
NYTK/summarization-hi-bart-hungarian
58b5985253f4214f10d07259e515fa8a7530866c
2022-02-14T13:27:17.000Z
[ "pytorch", "bart", "text2text-generation", "hu", "transformers", "summarization", "license:gpl", "autotrain_compatible" ]
summarization
false
NYTK
null
NYTK/summarization-hi-bart-hungarian
11
null
transformers
10,992
--- language: - hu tags: - summarization license: gpl metrics: - rouge widget: - text: "A Tisza-parti város állatkertjében régóta tartanak szurikátákat ( Suricata suricatta ) , de tavaly tavaszig nem sikerült szaporítani őket , annak ellenére , hogy tágas ház és kifutó épült számukra - közölte Veprik Róbert igazgató . 2010-ben alakult ki az új - három Amszterdamból származó nőstényből és egy budapesti fiatal hímből álló - csapat , amely szaporodni kezdett . 2011-ben három , idén pedig egy utóddal örvendeztették meg a gondozókat és az állatbarátokat . A szurikáták utódai - tizenegy hetes vemhesség után - október és március között vakon és szőrtelenül jönnek a világra . A kicsinyek háromhetesen bújnak elő az üregből , és nevelésükben mindkét szülő részt vesz . A szurikátacsapatokban a család tagjai nagyon szoros kapcsolatban állnak egymással , viszont nagyon harciasan fellépnek az idegenekkel szemben , akár meg is ölhetik azt az állatot , amelyet betolakodónak tekintenek . Bár a Dél-Afrikában , a Kalahári sivatagban őshonos cibetmacskaféle ragadozókat a szegedi állatkertben természetes élőhelyükhöz képest kevesebb veszély fenyegeti , a vadasparki erdőben ragadozó madarak is élnek , amelyek akár zsákmányként is tekinthetnének a szurikátákra . A szegedi csapatnál azonban szigorú őrség van , mindig lesi valaki két lábra állva a veszélyforrásokat . Az őrszemek figyelmét még a sárkányrepülők is felkeltik , és felbukkanásakor valamennyi egyed biztos helyre menekül . A szurikáták a Kalahári sivatag bozótos , sziklás területein csapatokban élnek . A 700 gramm körüli testtömegű ragadozók rovarokkal , lárvákkal , skorpiókkal táplálkoznak , de néha elfogyasztják a kisebb gerinceseket , tojásokat és növényi gumókat is . A nappal aktív állatok földalatti üregrendszert ásnak , amelynek több bejárata is van . Ha a szurikáták idegen csapattal vagy ragadozóval kerülnek szembe , azonnal elkezdenek ásni , nagy porfelhőt kavarva . Az is gyakorta előfordul , hogy szorosan egymáshoz bújnak , felborzolják szőrüket , megnyújtják testüket , hogy minél nagyobbnak látszódjanak . Az előadásuk csúcspontján pedig az egész csapat a levegőbe ugrik , közben pedig morog . A hangadás egyébként is fontos a szurikáták kapcsolatában , az egyedek legalább tízféle jelzést használnak a kolónián belül ." --- # Hungarian Abstractive Summarization BART model For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - BART base model (see Results Table - bold): - Pretrained on Webcorpus 2.0 - Finetuned HI corpus (hvg.hu + index.hu) - Segments: 559.162 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 512 - max_target_length = 256 ## Results | Model | HI | NOL | | ------------- | ------------- | ------------- | | BART-base-512 | **30.18/13.86/22.92** | 46.48/32.40/39.45 | | BART-base-1024| 31.86/14.59/23.79 | 47.01/32.91/39.97 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {{Yang Zijian Győző}}, pages = {15--29} } ```
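A minimal inference sketch via the summarization pipeline; note the limitation above that inputs are expected to be pre-tokenized with HuSpaCy, which this sketch skips:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="NYTK/summarization-hi-bart-hungarian")

# First clause of the widget example above, as a stand-in input.
text = ("A Tisza-parti város állatkertjében régóta tartanak szurikátákat, "
        "de tavaly tavaszig nem sikerült szaporítani őket, annak ellenére, "
        "hogy tágas ház és kifutó épült számukra.")
print(summarizer(text, max_length=256, truncation=True)[0]["summary_text"])
```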
NYTK/translation-bart-128-en-hu
e7ba362f3067d29ec1fbd38d3b7a7b4557dadb5c
2022-02-14T13:30:36.000Z
[ "pytorch", "bart", "text2text-generation", "en", "hu", "transformers", "translation", "license:gpl", "autotrain_compatible" ]
translation
false
NYTK
null
NYTK/translation-bart-128-en-hu
11
null
transformers
10,993
--- language: - en - hu tags: - translation license: gpl metrics: - sacrebleu - chrf widget: - text: "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter." example_title: "Translation: English-Hungarian" --- # BART Translation model For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Source language: English - Target language: Hungarian - BART base model: - Pretrained on English WikiText-103 and Hungarian Wikipedia - Finetuned on subcorpora from OPUS - Segments: 56.837.602 ## Limitations - tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy)) - max_source_length = 128 - max_target_length = 128 ## Results | Model | BLEU | chrF-3 | chrF-6 | | ------------- | ------------- | ------------- | ------------- | | Google | 25.30 | 54.09 | 49.0 | | **BART** | **36.89** | **60.77** | **56.4** | | mT5 | 27.69 | 53.73 | 48.57 | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {laki-yang-mt, title = {{Jobban fordítunk magyarra, mint a Google!}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Laki, László and Yang, Zijian Győző}, pages = {357--372} } ```
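A minimal inference sketch via the translation pipeline, assuming a recent `transformers` version that accepts the bare `translation` task for a seq2seq checkpoint; the HuSpaCy-tokenization caveat from the limitations above applies. The same pattern works for the related NYTK/translation-bart-en-hu card below.

```python
from transformers import pipeline

translator = pipeline("translation", model="NYTK/translation-bart-128-en-hu")
result = translator(
    "This may not make much sense to you, sir, but I'd like to ask "
    "your permission to date your daughter."
)
print(result[0]["translation_text"])
```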
NYTK/translation-bart-en-hu
879ebeb975a3226c2336501fbda338b2095ecd9f
2022-02-14T13:28:40.000Z
[ "pytorch", "bart", "text2text-generation", "en", "hu", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
NYTK
null
NYTK/translation-bart-en-hu
11
null
transformers
10,994
---
language:
- en
- hu
tags:
- translation
license: apache-2.0
metrics:
- sacrebleu
- chrf
widget:
- text: "This may not make much sense to you, sir, but I'd like to ask your permission to date your daughter."
  example_title: "Translation: English-Hungarian"
---

# BART Translation model

For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp).

- Source language: English
- Target language: Hungarian
- Pretrained on English WikiText-103 and Hungarian Wikipedia
- Finetuned on subcorpora from OPUS
  - Segments: 56.837.602

## Limitations

- tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy))

## Results

| Model | BLEU | chrF-3 |
| ------------- | ------------- | ------------- |
| Google en-hu | 25.30 | 54.08 |
| **BART-base-enhu** | **34.38** | **58.88** |
| Google hu-en | 34.48 | 59.59 |
| **BART-base-huen** | **38.03** | **61.37** |

## Citation

If you use this model, please cite the following paper:

```
@inproceedings {yang-bart,
    title = {{BARTerezzünk! Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}},
    booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
    year = {2022},
    publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
    address = {Szeged, Magyarország},
    author = {{Yang Zijian Győző}},
    pages = {15--29}
}
```
NbAiLab/XLSR-1B-bokmaal-low
0527f7470f6f6a352993d4032fa329da459bc2ea
2022-02-11T17:06:04.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "model-index" ]
automatic-speech-recognition
false
NbAiLab
null
NbAiLab/XLSR-1B-bokmaal-low
11
null
transformers
10,995
--- tags: - generated_from_trainer model-index: - name: XLSR-1B-bokmaal-low results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLSR-1B-bokmaal-low This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1579 - Wer: 0.0722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.7e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 34.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.434 | 0.24 | 500 | 0.1704 | 0.1378 | | 0.2833 | 0.48 | 1000 | 0.1638 | 0.1324 | | 0.2478 | 0.72 | 1500 | 0.1606 | 0.1240 | | 0.2276 | 0.97 | 2000 | 0.1562 | 0.1212 | | 0.2208 | 1.21 | 2500 | 0.1576 | 0.1172 | | 0.2148 | 1.45 | 3000 | 0.1502 | 0.1119 | | 0.1994 | 1.69 | 3500 | 0.1409 | 0.1110 | | 0.1932 | 1.93 | 4000 | 0.1432 | 0.1112 | | 0.2122 | 2.17 | 4500 | 0.1443 | 0.1098 | | 0.2177 | 2.42 | 5000 | 0.1329 | 0.1102 | | 0.2058 | 2.66 | 5500 | 0.1403 | 0.1070 | | 0.2216 | 2.9 | 6000 | 0.1342 | 0.1067 | | 0.1984 | 3.14 | 6500 | 0.1370 | 0.1030 | | 0.2056 | 3.38 | 7000 | 0.1371 | 0.1041 | | 0.1735 | 3.62 | 7500 | 0.1296 | 0.1003 | | 0.203 | 3.87 | 8000 | 0.1301 | 0.1005 | | 0.1835 | 4.11 | 8500 | 0.1310 | 0.1004 | | 0.178 | 4.35 | 9000 | 0.1300 | 0.0959 | | 0.1585 | 4.59 | 9500 | 0.1277 | 0.0966 | | 0.1848 | 4.83 | 10000 | 0.1260 | 0.0974 | | 0.169 | 5.07 | 10500 | 0.1281 | 0.0969 | | 0.1666 | 5.32 | 11000 | 0.1291 | 0.1003 | | 0.1552 | 5.56 | 11500 | 0.1271 | 0.0959 | | 0.2736 | 5.8 | 12000 | 0.1320 | 0.0935 | | 0.2845 | 6.04 | 12500 | 0.1299 | 0.0921 | | 0.1536 | 6.28 | 13000 | 0.1282 | 0.0927 | | 0.1491 | 6.52 | 13500 | 0.1240 | 0.0906 | | 0.1579 | 6.77 | 14000 | 0.1208 | 0.0921 | | 0.16 | 7.01 | 14500 | 0.1182 | 0.0903 | | 0.1367 | 7.25 | 15000 | 0.1214 | 0.0922 | | 0.1499 | 7.49 | 15500 | 0.1232 | 0.0916 | | 0.148 | 7.73 | 16000 | 0.1184 | 0.0896 | | 0.1426 | 7.97 | 16500 | 0.1201 | 0.0889 | | 0.1471 | 8.22 | 17000 | 0.1256 | 0.0882 | | 0.1358 | 8.46 | 17500 | 0.1265 | 0.0909 | | 0.1245 | 8.7 | 18000 | 0.1263 | 0.0886 | | 0.1407 | 8.94 | 18500 | 0.1226 | 0.0885 | | 0.1289 | 9.18 | 19000 | 0.1315 | 0.0873 | | 0.1326 | 9.42 | 19500 | 0.1233 | 0.0868 | | 0.1305 | 9.67 | 20000 | 0.1237 | 0.0870 | | 0.1432 | 9.91 | 20500 | 0.1234 | 0.0857 | | 0.1205 | 10.15 | 21000 | 0.1303 | 0.0858 | | 0.1248 | 10.39 | 21500 | 0.1252 | 0.0858 | | 0.1251 | 10.63 | 22000 | 0.1253 | 0.0869 | | 0.1143 | 10.87 | 22500 | 0.1266 | 0.0860 | | 0.1155 | 11.12 | 23000 | 0.1219 | 0.0862 | | 0.1227 | 11.36 | 23500 | 0.1329 | 0.0864 | | 0.1229 | 11.6 | 24000 | 0.1244 | 0.0855 | | 0.1112 | 11.84 | 24500 | 0.1356 | 0.0851 | | 0.2163 | 12.08 | 25000 | 0.1252 | 0.0847 | | 0.1146 | 12.32 | 25500 | 0.1211 | 0.0837 | | 0.1058 | 12.57 | 26000 | 0.1247 | 0.0843 | | 0.1099 | 12.81 | 26500 | 0.1189 | 0.0833 | | 0.1028 | 13.05 | 27000 | 0.1303 | 0.0815 | | 0.1092 | 13.29 | 27500 | 0.1305 | 0.0838 | | 0.1076 | 13.53 
| 28000 | 0.1276 | 0.0842 | | 0.1074 | 13.77 | 28500 | 0.1268 | 0.0844 | | 0.0971 | 14.02 | 29000 | 0.1322 | 0.0839 | | 0.1109 | 14.26 | 29500 | 0.1287 | 0.0821 | | 0.0991 | 14.5 | 30000 | 0.1289 | 0.0831 | | 0.1095 | 14.74 | 30500 | 0.1273 | 0.0822 | | 0.1015 | 14.98 | 31000 | 0.1326 | 0.0816 | | 0.1051 | 15.22 | 31500 | 0.1337 | 0.0814 | | 0.0894 | 15.47 | 32000 | 0.1331 | 0.0802 | | 0.1 | 15.71 | 32500 | 0.1304 | 0.0798 | | 0.0957 | 15.95 | 33000 | 0.1293 | 0.0824 | | 0.0921 | 16.19 | 33500 | 0.1382 | 0.0808 | | 0.0986 | 16.43 | 34000 | 0.1301 | 0.0788 | | 0.098 | 16.67 | 34500 | 0.1305 | 0.0795 | | 0.0974 | 16.92 | 35000 | 0.1325 | 0.0796 | | 0.0886 | 17.16 | 35500 | 0.1332 | 0.0796 | | 0.0892 | 17.4 | 36000 | 0.1327 | 0.0785 | | 0.0917 | 17.64 | 36500 | 0.1304 | 0.0793 | | 0.0919 | 17.88 | 37000 | 0.1353 | 0.0791 | | 0.1007 | 18.12 | 37500 | 0.1340 | 0.0791 | | 0.0831 | 18.37 | 38000 | 0.1327 | 0.0786 | | 0.0862 | 18.61 | 38500 | 0.1343 | 0.0792 | | 0.0837 | 18.85 | 39000 | 0.1334 | 0.0777 | | 0.0771 | 19.09 | 39500 | 0.1456 | 0.0778 | | 0.0841 | 19.33 | 40000 | 0.1365 | 0.0784 | | 0.0874 | 19.57 | 40500 | 0.1379 | 0.0779 | | 0.0773 | 19.82 | 41000 | 0.1359 | 0.0776 | | 0.0771 | 20.06 | 41500 | 0.1392 | 0.0776 | | 0.0861 | 20.3 | 42000 | 0.1395 | 0.0774 | | 0.0773 | 20.54 | 42500 | 0.1356 | 0.0775 | | 0.069 | 20.78 | 43000 | 0.1399 | 0.0765 | | 0.0823 | 21.02 | 43500 | 0.1469 | 0.0774 | | 0.0747 | 21.27 | 44000 | 0.1415 | 0.0768 | | 0.0703 | 21.51 | 44500 | 0.1405 | 0.0778 | | 0.0776 | 21.75 | 45000 | 0.1492 | 0.0778 | | 0.0833 | 21.99 | 45500 | 0.1448 | 0.0767 | | 0.0796 | 22.23 | 46000 | 0.1434 | 0.0761 | | 0.0613 | 22.47 | 46500 | 0.1446 | 0.0768 | | 0.0753 | 22.72 | 47000 | 0.1439 | 0.0757 | | 0.076 | 22.96 | 47500 | 0.1402 | 0.0759 | | 0.0619 | 23.2 | 48000 | 0.1473 | 0.0767 | | 0.1322 | 23.44 | 48500 | 0.1431 | 0.0766 | | 0.0691 | 23.68 | 49000 | 0.1452 | 0.0753 | | 0.061 | 23.92 | 49500 | 0.1452 | 0.0752 | | 0.0716 | 24.17 | 50000 | 0.1429 | 0.0756 | | 0.074 | 24.41 | 50500 | 0.1440 | 0.0746 | | 0.0696 | 24.65 | 51000 | 0.1459 | 0.0756 | | 0.081 | 24.89 | 51500 | 0.1443 | 0.0751 | | 0.0754 | 25.13 | 52000 | 0.1483 | 0.0755 | | 0.0864 | 25.37 | 52500 | 0.1467 | 0.0757 | | 0.0662 | 25.62 | 53000 | 0.1471 | 0.0748 | | 0.109 | 25.86 | 53500 | 0.1472 | 0.0759 | | 0.0682 | 26.1 | 54000 | 0.1539 | 0.0748 | | 0.0655 | 26.34 | 54500 | 0.1469 | 0.0743 | | 0.0651 | 26.58 | 55000 | 0.1553 | 0.0748 | | 0.0666 | 26.82 | 55500 | 0.1520 | 0.0744 | | 0.0724 | 27.07 | 56000 | 0.1526 | 0.0738 | | 0.067 | 27.31 | 56500 | 0.1489 | 0.0738 | | 0.0658 | 27.55 | 57000 | 0.1518 | 0.0738 | | 0.0581 | 27.79 | 57500 | 0.1518 | 0.0739 | | 0.0639 | 28.03 | 58000 | 0.1495 | 0.0736 | | 0.0606 | 28.27 | 58500 | 0.1549 | 0.0739 | | 0.0641 | 28.52 | 59000 | 0.1513 | 0.0735 | | 0.0612 | 28.76 | 59500 | 0.1524 | 0.0739 | | 0.0536 | 29.0 | 60000 | 0.1565 | 0.0741 | | 0.0574 | 29.24 | 60500 | 0.1541 | 0.0741 | | 0.057 | 29.48 | 61000 | 0.1555 | 0.0741 | | 0.0624 | 29.72 | 61500 | 0.1590 | 0.0736 | | 0.0531 | 29.97 | 62000 | 0.1590 | 0.0734 | | 0.0661 | 30.21 | 62500 | 0.1599 | 0.0732 | | 0.0641 | 30.45 | 63000 | 0.1576 | 0.0730 | | 0.0562 | 30.69 | 63500 | 0.1593 | 0.0734 | | 0.0527 | 30.93 | 64000 | 0.1604 | 0.0730 | | 0.0579 | 31.17 | 64500 | 0.1571 | 0.0734 | | 0.0508 | 31.42 | 65000 | 0.1603 | 0.0733 | | 0.0524 | 31.66 | 65500 | 0.1588 | 0.0726 | | 0.0564 | 31.9 | 66000 | 0.1571 | 0.0727 | | 0.0551 | 32.14 | 66500 | 0.1584 | 0.0728 | | 0.0564 | 32.38 | 67000 | 0.1565 | 0.0726 | | 0.0628 | 32.62 | 67500 | 0.1558 | 
0.0725 | | 0.0561 | 32.87 | 68000 | 0.1582 | 0.0727 | | 0.0553 | 33.11 | 68500 | 0.1591 | 0.0726 | | 0.0504 | 33.35 | 69000 | 0.1590 | 0.0725 | | 0.0539 | 33.59 | 69500 | 0.1582 | 0.0723 | | 0.0576 | 33.83 | 70000 | 0.1579 | 0.0722 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
NbAiLab/test_w5_long
09a805d260186524481aa1028efaf9b1303bd7ce
2021-12-16T12:46:14.000Z
[ "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
NbAiLab
null
NbAiLab/test_w5_long
11
null
transformers
10,996
Just for performing some experiments. Do not use.
Nokia/nlgp-natural
b874c088fcfafda664d8e3ef47e94aecbf7f86b2
2022-02-18T14:16:33.000Z
[ "pytorch", "gpt2", "text-generation", "en", "python", "arxiv:2108.05198", "transformers", "code completion", "code generation", "license:apache-2.0" ]
text-generation
false
Nokia
null
Nokia/nlgp-natural
11
null
transformers
10,997
---
language:
- en
- python
tags:
- code completion
- code generation
license: "apache-2.0"
---

# NLGP natural model

The NLGP natural model was introduced in the paper [Natural Language-Guided Programming](https://arxiv.org/abs/2108.05198). The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language **intent** in a certain code **context** (see the example below). This work was carried out by a research team in Nokia Bell Labs.

**Context**

```py
import matplotlib.pyplot as plt

values = [1, 2, 3, 4]
labels = ["a", "b", "c", "d"]
```

**Intent**

```py
# plot a bar chart
```

**Prediction**

```py
plt.bar(labels, values)
plt.show()
```

## Usage

```py
import re
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# load the model
tok = GPT2TokenizerFast.from_pretrained("Nokia/nlgp-natural")
model = GPT2LMHeadModel.from_pretrained("Nokia/nlgp-natural")

# preprocessing functions
num_spaces = [2, 4, 6, 8, 10, 12, 14, 16, 18]

def preprocess(context, query):
    """
    Encodes context + query as a single string and
    replaces whitespace with special tokens <|2space|>, <|4space|>, ...
    """
    input_str = f"{context}\n{query} <|endofcomment|>\n"
    indentation_symbols = {n: f"<|{n}space|>" for n in num_spaces}

    m = re.match("^[ ]+", input_str)
    if not m:
        return input_str

    leading_whitespace = m.group(0)
    N = len(leading_whitespace)
    # note: the original card used self.num_spaces / self.indentation_symbols here,
    # a leftover from a method version of this function; plain names are used instead
    for n in num_spaces:
        leading_whitespace = leading_whitespace.replace(n * " ", indentation_symbols[n])

    return leading_whitespace + input_str[N:]

detokenize_pattern = re.compile(r"<\|(\d+)space\|>")

def postprocess(output):
    output = output.split("<|cell|>")[0]
    def insert_space(m):
        n_spaces = int(m.group(1))
        return n_spaces * " "
    return detokenize_pattern.sub(insert_space, output)

# inference
code_context = """
import matplotlib.pyplot as plt

values = [1, 2, 3, 4]
labels = ["a", "b", "c", "d"]
"""
query = "# plot a bar chart"

input_str = preprocess(code_context, query)
input_ids = tok(input_str, return_tensors="pt").input_ids

max_length = 150  # don't generate output longer than this length
# total = input + output, capped at the model's 1024-token context window
total_max_length = min(1024, input_ids.shape[-1] + max_length)

input_and_output = model.generate(
    input_ids=input_ids,
    max_length=total_max_length,
    min_length=10,
    do_sample=False,
    num_beams=4,
    early_stopping=True,
    eos_token_id=tok.encode("<|cell|>")[0]
)

output = input_and_output[:, input_ids.shape[-1]:]  # remove the tokens that correspond to the input_str
output_str = tok.decode(output[0])

postprocess(output_str)
```

## License and copyright

Copyright 2021 Nokia

Licensed under the Apache License 2.0

SPDX-License-Identifier: Apache-2.0
PhilSad/gpt-scp-neo-125M
fc566b64d003ca29bbaf856bbb6d3ac990ddd5d7
2022-02-23T22:41:55.000Z
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
PhilSad
null
PhilSad/gpt-scp-neo-125M
11
null
transformers
10,998
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: output_gptneo125-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_gptneo125-2 This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
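Although the card is largely autogenerated, the checkpoint is a standard GPT-Neo text-generation model, so a minimal sampling sketch applies; the prompt is illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="PhilSad/gpt-scp-neo-125M")
print(generator(
    "Item #: SCP-",
    max_length=80,
    do_sample=True,
    top_p=0.95,
)[0]["generated_text"])
```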
RJ3vans/CLNspanTagger
e1f68e7537552b5686c3829240bf30e98f6d190a
2021-09-07T13:24:46.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
RJ3vans
null
RJ3vans/CLNspanTagger
11
null
transformers
10,999
This model identifies compound nouns in input sentences. Try the test sentence: I love apples [and] potatoes. Accuracy is best when you place square brackets around the coordinating conjunction. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
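A minimal sketch for trying the test sentence above via the token-classification pipeline; the span label scheme is whatever the checkpoint's config defines:

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="RJ3vans/CLNspanTagger")
for token in tagger("I love apples [and] potatoes."):
    print(token["word"], token["entity"], round(token["score"], 3))
```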