lbourdois committed
Commit e2da2e9 · 1 Parent(s): 5dc6231

Add multilingual to the language tag


Hi! A PR to add `multilingual` to the language tag to improve referencing.
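Not part of the PR itself, but as a minimal illustration: the metadata fields this change touches can be read back with the `huggingface_hub` library. The repo id `Helsinki-NLP/opus-mt-tc-base-ces_slk-uk` is an assumption inferred from the model name in the card.

```python
# Minimal sketch (assumption: the card lives at Helsinki-NLP/opus-mt-tc-base-ces_slk-uk).
# Loads the model card and prints the metadata fields this PR touches.
from huggingface_hub import ModelCard

card = ModelCard.load("Helsinki-NLP/opus-mt-tc-base-ces_slk-uk")
print(card.data.language)  # cs, sk, uk and, after this PR, multilingual
print(card.data.license)   # cc-by-4.0
```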

Files changed (1)
1. README.md (+12 -22)
README.md CHANGED
@@ -3,49 +3,39 @@ language:
 - cs
 - sk
 - uk
+- multilingual
+license: cc-by-4.0
 tags:
 - translation
 - opus-mt-tc
-license: cc-by-4.0
 model-index:
 - name: opus-mt-tc-base-ces_slk-uk
   results:
   - task:
-      name: Translation ces-ukr
       type: translation
-      args: ces-ukr
+      name: Translation ces-ukr
     dataset:
       name: flores101-devtest
       type: flores_101
       args: ces ukr devtest
     metrics:
-    - name: BLEU
-      type: bleu
+    - type: bleu
       value: 21.8
-  - task:
-      name: Translation slk-ukr
-      type: translation
-      args: slk-ukr
-    dataset:
-      name: flores101-devtest
-      type: flores_101
-      args: slk ukr devtest
-    metrics:
-    - name: BLEU
-      type: bleu
+      name: BLEU
+    - type: bleu
       value: 21.4
+      name: BLEU
   - task:
-      name: Translation ces-ukr
       type: translation
-      args: ces-ukr
+      name: Translation ces-ukr
     dataset:
       name: tatoeba-test-v2021-08-07
       type: tatoeba_mt
       args: ces-ukr
     metrics:
-    - name: BLEU
-      type: bleu
+    - type: bleu
       value: 48.6
+      name: BLEU
 ---
 # opus-mt-tc-base-ces_slk-uk
 
@@ -53,7 +43,7 @@ Neural machine translation model for translating from Czech and Slovak (cs+sk) t
 
 This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
 
-* Publications: [OPUS-MT Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
+* Publications: [OPUS-MT Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
 
 ```
 @inproceedings{tiedemann-thottingal-2020-opus,
@@ -136,7 +126,7 @@ print(pipe("Replace this with text in an accepted source language."))
 
 ## Acknowledgements
 
-The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unions Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
+The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unions Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
 
 ## Model conversion info
 
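As background on the card being edited: its description says the models are converted to PyTorch via the transformers library, and the last hunk header above shows a `pipe(...)` call from the card's usage example. A minimal sketch of that kind of usage, assuming the repo id `Helsinki-NLP/opus-mt-tc-base-ces_slk-uk` (inferred from the model name, not confirmed by this diff):

```python
# Hedged sketch of the translation pipeline usage hinted at in the card
# (repo id Helsinki-NLP/opus-mt-tc-base-ces_slk-uk is an assumption).
from transformers import pipeline

# The model translates Czech/Slovak input to Ukrainian.
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-ces_slk-uk")
print(pipe("Replace this with text in an accepted source language."))
```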