modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Helsinki-NLP/opus-mt-fr-guw | f82d6a8dcbf9259bbd46578112af9b9ac9a2b00d | 2021-09-09T21:54:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"guw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-guw | 8 | null | transformers | 12,900 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-guw
* source languages: fr
* target languages: guw
* OPUS readme: [fr-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.guw | 31.4 | 0.505 |
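Cards like the one above can be loaded through the `transformers` translation pipeline. The sketch below is a minimal illustration, not part of the original card: the `opus_mt_model_name` helper is hypothetical, and the actual translation call is gated behind `__main__` because it downloads the model weights (requires `pip install transformers sentencepiece`).

```python
def opus_mt_model_name(src: str, tgt: str) -> str:
    """Build the Hub ID for a Helsinki-NLP OPUS-MT language pair."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"


if __name__ == "__main__":
    # Heavyweight import and model download kept out of module scope.
    from transformers import pipeline

    translator = pipeline("translation", model=opus_mt_model_name("fr", "guw"))
    print(translator("Bonjour le monde")[0]["translation_text"])
```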
|
Helsinki-NLP/opus-mt-fr-ho | 3d3587e677fa54c24f19a34e58fe7e73cad61c2a | 2021-09-09T21:54:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ho",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ho | 8 | null | transformers | 12,901 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ho
* source languages: fr
* target languages: ho
* OPUS readme: [fr-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ho | 25.4 | 0.480 |
|
Helsinki-NLP/opus-mt-fr-ig | a4ce5e546406711d7702c6b1cf7c388051913800 | 2021-09-09T21:54:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ig",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ig | 8 | null | transformers | 12,902 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ig
* source languages: fr
* target languages: ig
* OPUS readme: [fr-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ig | 29.0 | 0.445 |
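The chr-F column in these benchmark tables is a character n-gram F-score. As an illustration only, here is a simplified pure-Python sketch of the metric (uniform n-gram weighting, whitespace stripped); the scores in the tables come from the OPUS-MT evaluation pipeline, not from this code.

```python
from collections import Counter


def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams with whitespace removed (simplified preprocessing).
    s = "".join(text.split())
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))


def chrf(hypothesis: str, reference: str, max_order: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: average char n-gram precision/recall, combined as F-beta."""
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

A perfect match scores 1.0 and fully disjoint strings score 0.0, matching the 0–1 scale used in the tables.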
|
Helsinki-NLP/opus-mt-fr-kg | 900be309d6bc4c43e709f221cebba3709784435d | 2021-09-09T21:54:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"kg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-kg | 8 | null | transformers | 12,903 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-kg
* source languages: fr
* target languages: kg
* OPUS readme: [fr-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kg | 30.4 | 0.523 |
|
Helsinki-NLP/opus-mt-fr-kwy | 17adca5852eac8966f1fb6807b7da83ecf1a2b51 | 2021-09-09T21:54:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"kwy",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-kwy | 8 | null | transformers | 12,904 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-kwy
* source languages: fr
* target languages: kwy
* OPUS readme: [fr-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kwy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kwy | 22.5 | 0.428 |
|
Helsinki-NLP/opus-mt-fr-lue | ab453e6d3d2556c889ccc3562e15fa125d667901 | 2021-09-09T21:55:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"lue",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-lue | 8 | null | transformers | 12,905 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-lue
* source languages: fr
* target languages: lue
* OPUS readme: [fr-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lue | 23.1 | 0.485 |
|
Helsinki-NLP/opus-mt-fr-sm | 4c98ed70568463704055a233f802219748caa75f | 2021-09-09T21:56:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"sm",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-sm | 8 | null | transformers | 12,906 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-sm
* source languages: fr
* target languages: sm
* OPUS readme: [fr-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sm | 28.8 | 0.474 |
|
Helsinki-NLP/opus-mt-fr-st | 891528cfd2f238ad0009a2a5a69075b2e501f5b9 | 2021-09-09T21:57:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"st",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-st | 8 | null | transformers | 12,907 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-st
* source languages: fr
* target languages: st
* OPUS readme: [fr-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.st | 34.6 | 0.540 |
|
Helsinki-NLP/opus-mt-fr-tiv | 5618d4e7d13a5eee8e18a7c15ae962415e1800b4 | 2021-09-09T21:57:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"tiv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-tiv | 8 | null | transformers | 12,908 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-tiv
* source languages: fr
* target languages: tiv
* OPUS readme: [fr-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tiv | 23.5 | 0.406 |
|
Helsinki-NLP/opus-mt-fr-tpi | 99d2ac6cf7d16cc6a101e70929f5e7efa7b64f23 | 2021-09-09T21:57:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"tpi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-tpi | 8 | null | transformers | 12,909 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-tpi
* source languages: fr
* target languages: tpi
* OPUS readme: [fr-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tpi | 30.0 | 0.487 |
|
Helsinki-NLP/opus-mt-fr-war | e9ee39c9f86e17af0970402405e6f79a4cdebb32 | 2021-09-09T21:58:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"war",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-war | 8 | null | transformers | 12,910 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-war
* source languages: fr
* target languages: war
* OPUS readme: [fr-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.war | 33.7 | 0.538 |
|
Helsinki-NLP/opus-mt-fse-fi | 4578d0593f0d3f63d716b5aa5a3d3bdd5af78418 | 2021-09-09T21:58:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fse",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fse-fi | 8 | null | transformers | 12,911 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fse-fi
* source languages: fse
* target languages: fi
* OPUS readme: [fse-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fse-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fse.fi | 90.2 | 0.943 |
|
Helsinki-NLP/opus-mt-gaa-sv | 974970cef60cd58c331f6112a6d5b9f403f13c4f | 2021-09-09T21:58:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gaa",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gaa-sv | 8 | null | transformers | 12,912 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gaa-sv
* source languages: gaa
* target languages: sv
* OPUS readme: [gaa-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.sv | 30.1 | 0.489 |
|
Helsinki-NLP/opus-mt-ha-sv | a57326f406249cd9cf3aa270014bb73d94db04ae | 2021-09-09T22:00:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ha",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ha-sv | 8 | null | transformers | 12,913 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ha-sv
* source languages: ha
* target languages: sv
* OPUS readme: [ha-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.sv | 25.8 | 0.438 |
|
Helsinki-NLP/opus-mt-he-ru | 50425e3b84a0470bcf42647ad6bab761bd12d39a | 2020-10-26T14:32:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-he-ru | 8 | null | transformers | 12,914 | ---
language:
- he
- ru
tags:
- translation
license: apache-2.0
---
### he-ru
* source group: Hebrew
* target group: Russian
* OPUS readme: [heb-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-rus/README.md)
* model: transformer
* source language(s): heb
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.rus | 40.5 | 0.599 |
### System Info:
- hf_name: he-ru
- source_languages: heb
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'ru']
- src_constituents: {'heb'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-rus
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.test.txt
- src_alpha3: heb
- tgt_alpha3: rus
- chrF2_score: 0.599
- bleu: 40.5
- brevity_penalty: 0.963
- ref_len: 16583.0
- src_name: Hebrew
- tgt_name: Russian
- train_date: 2020-10-04 00:00:00
- src_alpha2: he
- tgt_alpha2: ru
- prefer_old: False
- short_pair: he-ru
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
- port_machine: LM0-400-22516.local
- port_time: 2020-10-26-16:16 |
Helsinki-NLP/opus-mt-ht-sv | eb102498c382a7a8ea26d668c58de3454bd02cfb | 2021-09-09T22:10:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ht",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ht-sv | 8 | null | transformers | 12,915 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ht-sv
* source languages: ht
* target languages: sv
* OPUS readme: [ht-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.sv | 27.9 | 0.463 |
|
Helsinki-NLP/opus-mt-hu-de | fc7591189b7d14c929716db64ca8f48139229272 | 2021-09-09T22:10:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hu",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hu-de | 8 | null | transformers | 12,916 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hu-de
* source languages: hu
* target languages: de
* OPUS readme: [hu-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.de | 44.1 | 0.637 |
|
Helsinki-NLP/opus-mt-is-it | e7732da6a79bb92b135272e61deacc72b56fbc4a | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"is",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-is-it | 8 | null | transformers | 12,917 | ---
language:
- is
- it
tags:
- translation
license: apache-2.0
---
### isl-ita
* source group: Icelandic
* target group: Italian
* OPUS readme: [isl-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-ita/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.ita | 46.7 | 0.662 |
### System Info:
- hf_name: isl-ita
- source_languages: isl
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'it']
- src_constituents: {'isl'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: ita
- short_pair: is-it
- chrF2_score: 0.662
- bleu: 46.7
- brevity_penalty: 0.977
- ref_len: 1450.0
- src_name: Icelandic
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: it
- prefer_old: False
- long_pair: isl-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-it-uk | 1fd7fedea7253943611ab9ad7490d5e5e51b8c3d | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-uk | 8 | null | transformers | 12,918 | ---
language:
- it
- uk
tags:
- translation
license: apache-2.0
---
### ita-ukr
* source group: Italian
* target group: Ukrainian
* OPUS readme: [ita-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ukr/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.ukr | 45.9 | 0.657 |
### System Info:
- hf_name: ita-ukr
- source_languages: ita
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'uk']
- src_constituents: {'ita'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: ukr
- short_pair: it-uk
- chrF2_score: 0.657
- bleu: 45.9
- brevity_penalty: 0.989
- ref_len: 25353.0
- src_name: Italian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: uk
- prefer_old: False
- long_pair: ita-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ja-sh | a91efa3afbffefffeb79f194329359dbf31a013c | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"sh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-sh | 8 | null | transformers | 12,919 | ---
language:
- ja
- sh
tags:
- translation
license: apache-2.0
---
### jpn-hbs
* source group: Japanese
* target group: Serbo-Croatian
* OPUS readme: [jpn-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hbs/README.md)
* model: transformer-align
* source language(s): jpn jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Latn
* target language(s): bos_Latn hrv srp_Cyrl srp_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.hbs | 22.6 | 0.447 |
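Because this model covers several target scripts, the card above notes that each input sentence must start with a `>>id<<` target-language token. The sketch below is an illustration of that convention; the `with_target_token` helper is hypothetical, and the actual translation call is gated behind `__main__` because it downloads the model weights.

```python
# Valid target-language IDs for the multi-target jpn-hbs model, per the card.
VALID_TARGETS = {"bos_Latn", "hrv", "srp_Cyrl", "srp_Latn"}


def with_target_token(text: str, target_id: str) -> str:
    """Prefix a sentence with the >>id<< token the model expects."""
    if target_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language ID: {target_id}")
    return f">>{target_id}<< {text}"


if __name__ == "__main__":
    from transformers import MarianMTModel, MarianTokenizer

    name = "Helsinki-NLP/opus-mt-ja-sh"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer([with_target_token("猫が好きです。", "hrv")],
                      return_tensors="pt")
    out = model.generate(**batch)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```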
### System Info:
- hf_name: jpn-hbs
- source_languages: jpn
- target_languages: hbs
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hbs/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'sh']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: hbs
- short_pair: ja-sh
- chrF2_score: 0.447
- bleu: 22.6
- brevity_penalty: 0.962
- ref_len: 2525.0
- src_name: Japanese
- tgt_name: Serbo-Croatian
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: sh
- prefer_old: False
- long_pair: jpn-hbs
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-lua-es | 84469eddef8c7e34548a0c63a2de88149c3171e7 | 2021-09-10T13:56:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lua",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lua-es | 8 | null | transformers | 12,920 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-es
* source languages: lua
* target languages: es
* OPUS readme: [lua-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.es | 23.1 | 0.409 |
|
Helsinki-NLP/opus-mt-mt-fi | 82b2a8e69a6acfbedddcd990a2323fc38ae7424d | 2021-09-10T13:58:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mt",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mt-fi | 8 | null | transformers | 12,921 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mt-fi
* source languages: mt
* target languages: fi
* OPUS readme: [mt-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.fi | 24.9 | 0.509 |
|
Helsinki-NLP/opus-mt-niu-sv | 575c24b76ecf85b0e76037e6c322abfedc62626a | 2021-09-10T13:59:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"niu",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-niu-sv | 8 | null | transformers | 12,922 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-sv
* source languages: niu
* target languages: sv
* OPUS readme: [niu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.sv | 29.2 | 0.478 |
|
Helsinki-NLP/opus-mt-nso-sv | b61e875c24050a31105c5f39c7c932885f52371c | 2021-09-10T13:59:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nso",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nso-sv | 8 | null | transformers | 12,923 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-sv
* source languages: nso
* target languages: sv
* OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.sv | 34.3 | 0.527 |
|
Helsinki-NLP/opus-mt-pis-es | 613ca066962c8da1b207e234b61fc7b19dcf8c4a | 2021-09-10T14:00:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pis",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pis-es | 8 | null | transformers | 12,924 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pis-es
* source languages: pis
* target languages: es
* OPUS readme: [pis-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.es | 24.1 | 0.421 |
|
Helsinki-NLP/opus-mt-rnd-fr | a58002372dfe212bbde6f1211f7f827c9c5f872e | 2021-09-10T14:01:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"rnd",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-rnd-fr | 8 | null | transformers | 12,925 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-rnd-fr
* source languages: rnd
* target languages: fr
* OPUS readme: [rnd-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.fr | 22.1 | 0.392 |
|
Helsinki-NLP/opus-mt-srn-es | aa7c939ea7c6e4543d7844ca13ef0745a809effa | 2021-09-10T14:04:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"srn",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-srn-es | 8 | null | transformers | 12,926 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-srn-es
* source languages: srn
* target languages: es
* OPUS readme: [srn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.es | 30.4 | 0.481 |
|
Helsinki-NLP/opus-mt-ssp-es | f35be270e9fa9ba6b970a735a4f5efc9f9055a4b | 2021-09-10T14:04:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ssp",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ssp-es | 8 | null | transformers | 12,927 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ssp-es
* source languages: ssp
* target languages: es
* OPUS readme: [ssp-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ssp-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ssp.es | 89.7 | 0.930 |
|
Helsinki-NLP/opus-mt-st-fr | b10ae7351feb17fc5a410f77919cb3b0e6595b92 | 2021-09-10T14:05:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"st",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-st-fr | 8 | null | transformers | 12,928 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-st-fr
* source languages: st
* target languages: fr
* OPUS readme: [st-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.fr | 30.7 | 0.490 |
|
Helsinki-NLP/opus-mt-sv-chk | d52a50ce8c83a882a86283fbcf787f611c192afe | 2021-09-10T14:05:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"chk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-chk | 8 | null | transformers | 12,929 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-chk
* source languages: sv
* target languages: chk
* OPUS readme: [sv-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-chk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-chk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-chk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-chk/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.chk | 20.7 | 0.421 |
|
Helsinki-NLP/opus-mt-sv-gaa | fe2481a49dbc98e532950b90a28ed00f5b477513 | 2021-09-10T14:06:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"gaa",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-gaa | 8 | null | transformers | 12,930 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-gaa
* source languages: sv
* target languages: gaa
* OPUS readme: [sv-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-gaa/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-gaa/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-gaa/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.gaa | 31.3 | 0.522 |
|
Helsinki-NLP/opus-mt-sv-guw | 92254cdb7d0a263a0254df44f4775b7fe48cee6b | 2021-09-10T14:06:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"guw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-guw | 8 | null | transformers | 12,931 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-guw
* source languages: sv
* target languages: guw
* OPUS readme: [sv-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-guw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-guw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-guw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.guw | 33.5 | 0.531 |
|
Helsinki-NLP/opus-mt-sv-ht | 185dab77bcebb5b6fbf8b52e975e15709669ca26 | 2021-09-10T14:07:06.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"ht",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-ht | 8 | null | transformers | 12,932 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ht
* source languages: sv
* target languages: ht
* OPUS readme: [sv-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ht/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ht/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ht/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ht | 28.0 | 0.457 |
|
Helsinki-NLP/opus-mt-sv-iso | 5422172dc52c13c230563771d764511ecfb4d747 | 2021-09-10T14:07:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"iso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-iso | 8 | null | transformers | 12,933 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-iso
* source languages: sv
* target languages: iso
* OPUS readme: [sv-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-iso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-iso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-iso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.iso | 27.2 | 0.447 |
|
Helsinki-NLP/opus-mt-sv-nso | da1be4827386ca61c54fc15de172288d86bbc2c3 | 2021-09-10T14:08:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"nso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-nso | 8 | null | transformers | 12,934 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-nso
* source languages: sv
* target languages: nso
* OPUS readme: [sv-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-nso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.nso | 37.9 | 0.575 |
|
Helsinki-NLP/opus-mt-sv-st | 49521390eed06df58ed778b18241dd7368ba4280 | 2021-09-10T14:09:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"st",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-st | 8 | null | transformers | 12,935 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-st
* source languages: sv
* target languages: st
* OPUS readme: [sv-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-st/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-st/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-st/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.st | 38.8 | 0.584 |
|
Helsinki-NLP/opus-mt-tll-fr | 04ac8891ee8b81ce53acba208384cb1f57d093f9 | 2021-09-11T10:48:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tll",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tll-fr | 8 | null | transformers | 12,936 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tll-fr
* source languages: tll
* target languages: fr
* OPUS readme: [tll-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.fr | 25.2 | 0.426 |
|
Helsinki-NLP/opus-mt-tn-sv | 64bed9a37dca4467911aacfd92327b9530510c69 | 2021-09-11T10:48:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tn",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tn-sv | 8 | null | transformers | 12,937 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tn-sv
* source languages: tn
* target languages: sv
* OPUS readme: [tn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.sv | 32.0 | 0.508 |
|
Helsinki-NLP/opus-mt-tw-sv | 332ce1c43149694a678af64a77024918689e726c | 2021-09-11T10:50:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tw",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tw-sv | 8 | null | transformers | 12,938 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tw-sv
* source languages: tw
* target languages: sv
* OPUS readme: [tw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.sv | 29.0 | 0.471 |
|
Helsinki-NLP/opus-mt-uk-nl | 564299278455433557c04ae365b2420fdf86ae81 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-nl | 8 | null | transformers | 12,939 | ---
language:
- uk
- nl
tags:
- translation
license: apache-2.0
---
### ukr-nld
* source group: Ukrainian
* target group: Dutch
* OPUS readme: [ukr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nld/README.md)
* source language(s): ukr
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.nld | 48.7 | 0.656 |
### System Info:
- hf_name: ukr-nld
- source_languages: ukr
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'nl']
- src_constituents: {'ukr'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: nld
- short_pair: uk-nl
- chrF2_score: 0.6559999999999999
- bleu: 48.7
- brevity_penalty: 0.985
- ref_len: 59943.0
- src_name: Ukrainian
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: nl
- prefer_old: False
- long_pair: ukr-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-yo-fr | 2dedb0874c933a299a2718fae31376941d392a96 | 2021-09-11T10:52:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"yo",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-yo-fr | 8 | null | transformers | 12,940 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-fr
* source languages: yo
* target languages: fr
* OPUS readme: [yo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.fr | 24.1 | 0.408 |
|
Helsinki-NLP/opus-mt-zlw-en | 31425dba92443342042ba8bdca4e6da9756c6c1f | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pl",
"cs",
"zlw",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zlw-en | 8 | null | transformers | 12,941 | ---
language:
- pl
- cs
- zlw
- en
tags:
- translation
license: apache-2.0
---
### zlw-eng
* source group: West Slavic languages
* target group: English
* OPUS readme: [zlw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-eng/README.md)
* source language(s): ces csb_Latn dsb hsb pol
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-ceseng.ces.eng | 25.7 | 0.536 |
| newstest2009-ceseng.ces.eng | 24.6 | 0.530 |
| newstest2010-ceseng.ces.eng | 25.0 | 0.540 |
| newstest2011-ceseng.ces.eng | 25.9 | 0.539 |
| newstest2012-ceseng.ces.eng | 24.8 | 0.533 |
| newstest2013-ceseng.ces.eng | 27.8 | 0.551 |
| newstest2014-csen-ceseng.ces.eng | 30.3 | 0.585 |
| newstest2015-encs-ceseng.ces.eng | 27.5 | 0.542 |
| newstest2016-encs-ceseng.ces.eng | 29.1 | 0.564 |
| newstest2017-encs-ceseng.ces.eng | 26.0 | 0.537 |
| newstest2018-encs-ceseng.ces.eng | 27.3 | 0.544 |
| Tatoeba-test.ces-eng.ces.eng | 53.3 | 0.691 |
| Tatoeba-test.csb-eng.csb.eng | 10.2 | 0.313 |
| Tatoeba-test.dsb-eng.dsb.eng | 11.7 | 0.296 |
| Tatoeba-test.hsb-eng.hsb.eng | 24.6 | 0.426 |
| Tatoeba-test.multi.eng | 51.8 | 0.680 |
| Tatoeba-test.pol-eng.pol.eng | 50.4 | 0.667 |
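The chr-F column in these benchmark tables is a character n-gram F-score. A simplified, self-contained sketch of the idea (not the exact sacreBLEU chrF implementation — the real metric also supports word n-grams and different whitespace handling) might look like:

```python
from collections import Counter


def char_ngrams(text: str, n: int) -> Counter:
    """Character n-gram counts; spaces are ignored, as in default chrF."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))


def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average char n-gram precision/recall combined with an F-beta score."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        if hyp and ref:
            precisions.append(overlap / sum(hyp.values()))
            recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    # beta=2 weights recall twice as heavily as precision, as in chrF2.
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```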
### System Info:
- hf_name: zlw-eng
- source_languages: zlw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'cs', 'zlw', 'en']
- src_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zlw
- tgt_alpha3: eng
- short_pair: zlw-en
- chrF2_score: 0.68
- bleu: 51.8
- brevity_penalty: 0.9620000000000001
- ref_len: 75742.0
- src_name: West Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zlw
- tgt_alpha2: en
- prefer_old: False
- long_pair: zlw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
HeyLucasLeao/gpt-neo-small-emo-lyrics | 31d33a826154409f3b6da1d61c72ff1143e98c50 | 2021-08-19T14:07:03.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | HeyLucasLeao | null | HeyLucasLeao/gpt-neo-small-emo-lyrics | 8 | null | transformers | 12,942 | Create README.md
## Emo Bot
#### Model Description
This is a fine-tuned version of GPT-Neo-125M for generating music lyrics in the emo genre.
#### Training data
It was trained on 2381 songs by 15 bands that were important to emo culture in the early 2000s, though not all of them play directly in the genre.
#### Training Procedure
It was fine-tuned using the Trainer class from the Hugging Face Transformers library.
##### Learning Rate: **2e-4**
##### Epochs: **40**
##### Colab for Finetuning: https://colab.research.google.com/drive/1jwTYI1AygQf7FV9vCHTWA4Gf5i--sjsD?usp=sharing
##### Colab for Testing: https://colab.research.google.com/drive/1wSP4Wyr1-DTTNQbQps_RCO3ThhH-eeZc?usp=sharing
#### Goals
The intent is purely educational: this version of the model is made available as an example for future projects.
#### How to use
``` python
import re

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-emo-lyrics")
model = AutoModelForCausalLM.from_pretrained("HeyLucasLeao/gpt-neo-small-emo-lyrics")
model.to(device)
generated = tokenizer('I miss you', return_tensors='pt').input_ids.to(device)
# Generating texts
sample_outputs = model.generate(
    generated,
    do_sample=True,          # use sampling instead of greedy decoding
    top_k=10,                # keep only the 10 most probable tokens at each step
    max_length=200,          # maximum sequence length
    top_p=0.95,              # keep the most probable tokens covering 95% of the mass
    temperature=2.,          # higher temperature -> more random sequences
    num_return_sequences=3,  # number of sequences to generate
)
# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
    text = tokenizer.decode(sample_output.tolist())
    # Escape the pad token: in a raw regex '<|pad|>' means '<', 'pad' or '>'
    text = re.sub(re.escape('<|pad|>'), '', text)
    text = re.sub(' +', ' ', text)        # collapse runs of spaces
    text = re.sub('\n{2,}', '\n', text)   # collapse blank lines
    print(">> Text {}: {}".format(i + 1, text + '\n'))
""">> Texto 1: I miss you
I miss you more than anything
And if you change your mind
I do it like a change of mind
I always do it like theeah
Everybody wants a surprise
Everybody needs to stay collected
I keep your locked and numbered
Use this instead: Run like the wind
Use this instead: Run like the sun
And come back down: You've been replaced
Don't want to be the same
Tomorrow
I don't even need your name
The message is on the way
make it while you're holding on
It's better than it is
Everything more security than a parade
Im getting security
angs the world like a damned soul
We're hanging on a queue
and the truth is on the way
Are you listening?
We're getting security
Send me your soldiers
We're getting blood on"""
""">> Texto 2: I miss you
And I could forget your name
All the words we'd hear
You miss me
I need you
And I need you
You were all by my side
When we'd talk to no one
And I
Just to talk to you
It's easier than it has to be
Except for you
You missed my know-all
You meant to hug me
And I
Just want to feel you touch me
We'll work up
Something wild, just from the inside
Just get closer to me
I need you
You were all by my side
When we*d talk to you
, you better admit
That I'm too broken to be small
You're part of me
And I need you
But I
Don't know how
But I know I need you
Must"""
""">> Texto 3: I miss you
And I can't lie
Inside my head
All the hours you've been through
If I could change your mind
I would give it all away
And I'd give it all away
Just to give it away
To you
Now I wish that I could change
Just to you
I miss you so much
If I could change
So much
I'm looking down
At the road
The one that's already been
Searching for a better way to go
So much I need to see it clear
topk wish me an ehive
I wish I wish I wish I knew
I can give well
In this lonely night
The lonely night
I miss you
I wish it well
If I could change
So much
I need you"""
``` |
Huntersx/cola_model | 7b85a3e51346c31df81c9148a211b823f897df97 | 2021-05-18T21:06:41.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Huntersx | null | Huntersx/cola_model | 8 | null | transformers | 12,943 | Entry not found |
Iacopo/Shakespear-GPT2 | 57429ba642a0dc74903cce707892d3c4b245fc92 | 2022-01-25T13:35:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | Iacopo | null | Iacopo/Shakespear-GPT2 | 8 | null | transformers | 12,944 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a dataset of Shakespeare's plays.
## Model description
The model is the original gpt-2 model fine-tuned on a custom dataset.
## Intended uses & limitations
The model can be used to generate Shakespeare-like text. Because the training data consists of plays, the typographical structure of dramatic dialogue (speaker headings, verse line breaks) may be reproduced in the output.
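A minimal generation sketch is shown below. The helper name and the sampling parameters are our own illustrative choices, not part of the card; the import is deferred so that defining the function does not require Transformers to be installed or the weights to be downloaded.

```python
def generate_shakespeare(prompt: str,
                         model_name: str = "Iacopo/Shakespear-GPT2",
                         max_length: int = 100) -> str:
    """Generate a continuation of `prompt` in the style of the fine-tuning corpus."""
    # Deferred import: weights are only fetched when the helper is called.
    from transformers import pipeline

    generator = pipeline("text-generation", model=model_name)
    out = generator(prompt, max_length=max_length, do_sample=True, top_p=0.95)
    return out[0]["generated_text"]
```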
## Training and evaluation data
Trained on a corpus of Shakespeare's plays.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.11.0
|
ItcastAI/bert_finetunning_test | 3bde69c884dca4877f664cdf151fb0a9f03df22c | 2021-05-18T21:13:27.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ItcastAI | null | ItcastAI/bert_finetunning_test | 8 | null | transformers | 12,945 | Entry not found |
ItuThesis2022MlviNikw/deberta-v3-base | 036a91a3a3ec08435b1c9e995912f621c815ca4b | 2021-11-29T10:43:35.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers"
]
| text-classification | false | ItuThesis2022MlviNikw | null | ItuThesis2022MlviNikw/deberta-v3-base | 8 | null | transformers | 12,946 | Entry not found |
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly | 8c326b8f4e32c7c92c5ccce4a2fa88552ffb6d45 | 2021-12-07T15:55:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | Jeska | null | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly | 8 | null | transformers | 12,947 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly
This model is a fine-tuned version of [outputDAQonly/checkpoint-8710](https://huggingface.co/outputDAQonly/checkpoint-8710) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5008
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0751 | 1.0 | 1320 | 3.1674 | 0.4086 |
| 2.5619 | 2.0 | 2640 | 2.0335 | 0.6426 |
| 1.8549 | 3.0 | 3960 | 1.3537 | 0.7861 |
| 1.106 | 4.0 | 5280 | 0.9515 | 0.8519 |
| 0.6698 | 5.0 | 6600 | 0.7152 | 0.8757 |
| 0.4497 | 6.0 | 7920 | 0.5838 | 0.8921 |
| 0.2626 | 7.0 | 9240 | 0.5300 | 0.8940 |
| 0.1762 | 8.0 | 10560 | 0.4984 | 0.8958 |
| 0.119 | 9.0 | 11880 | 0.4906 | 0.9059 |
| 0.0919 | 10.0 | 13200 | 0.4896 | 0.8995 |
| 0.0722 | 11.0 | 14520 | 0.5012 | 0.9022 |
| 0.0517 | 12.0 | 15840 | 0.4951 | 0.9040 |
| 0.0353 | 13.0 | 17160 | 0.4988 | 0.9040 |
| 0.0334 | 14.0 | 18480 | 0.5035 | 0.9049 |
| 0.0304 | 15.0 | 19800 | 0.5008 | 0.9068 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
LysandreJik/test-upload | cd511825e20f543b82535d6ef30bfecd107ff391 | 2022-01-28T16:56:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | LysandreJik | null | LysandreJik/test-upload | 8 | null | transformers | 12,948 | Entry not found |
KBLab/albert-base-swedish-cased-alpha | c5f8b9805e0f6a30d7b8bcd63d1371fa73f395ff | 2022-07-28T14:08:17.000Z | [
"pytorch",
"albert",
"sv",
"transformers"
]
| null | false | KBLab | null | KBLab/albert-base-swedish-cased-alpha | 8 | null | transformers | 12,949 | ---
language: sv
---
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, Swedish Wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KBLab/bert-base-swedish-cased')
```
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KB/bert-base-swedish-cased-ner', tokenizer='KB/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
if token['word'].startswith('##'):
l[-1]['word'] += token['word'][2:]
else:
l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to load the ALBERT model is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
model = AutoModel.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
|
KoichiYasuoka/roberta-base-thai-spm | 335a1cfcf222d9da58e2137849efec2605ebf5b2 | 2022-07-16T15:48:22.000Z | [
"pytorch",
"roberta",
"fill-mask",
"th",
"transformers",
"thai",
"masked-lm",
"wikipedia",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-thai-spm | 8 | null | transformers | 12,950 | ---
language:
- "th"
tags:
- "thai"
- "masked-lm"
- "wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# roberta-base-thai-spm
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune `roberta-base-thai-spm` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
```
|
KoichiYasuoka/roberta-base-thai-syllable | 312aee1824957371e6ab0552a7f7d701d4bb4d49 | 2021-09-16T13:22:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"th",
"transformers",
"thai",
"masked-lm",
"wikipedia",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-thai-syllable | 8 | null | transformers | 12,951 | ---
language:
- "th"
tags:
- "thai"
- "masked-lm"
- "wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "<mask>"
widget:
- text: "แผนกนี้กำลัง<mask>กับความท้าทายใหม่"
---
# roberta-base-thai-syllable
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts, derived from [wangchanberta-base-wiki-syllable](https://huggingface.co/airesearch/wangchanberta-base-wiki-syllable). Character-embeddings are modified to use BertTokenizerFast. You can fine-tune `roberta-base-thai-syllable` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-syllable")
```
|
LeBenchmark/wav2vec2-FR-1K-base | f3f865bff01e834613753ff782cdc90771680c6c | 2021-11-30T04:22:15.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"transformers",
"license:apache-2.0"
]
| feature-extraction | false | LeBenchmark | null | LeBenchmark/wav2vec2-FR-1K-base | 8 | null | transformers | 12,952 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark](https://openreview.net/pdf?id=TSvj5dmuSd)
## Model and data descriptions
We release the following models, which can be found under our HuggingFace organization. Two different wav2vec2 architectures *Base* and *Large* are coupled with our small (1K), medium (3K), and large (7K) corpora. A larger one should come later. In short:
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used with the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@article{Evain2021LeBenchmarkAR,
title={LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech},
author={Sol{\`e}ne Evain and Ha Nguyen and Hang Le and Marcely Zanon Boito and Salima Mdhaffar and Sina Alisamir and Ziyi Tong and N. Tomashenko and Marco Dinarelli and Titouan Parcollet and A. Allauzen and Y. Est{\`e}ve and B. Lecouteux and F. Portet and S. Rossato and F. Ringeval and D. Schwab and L. Besacier},
journal={ArXiv},
year={2021},
volume={abs/2104.11462}
}
```
|
LegolasTheElf/Wav2Vec2_xls_r_lm_300m_hi | f5d3dcae290aefd01439ec5acd5f02cf5c1d09f5 | 2022-03-23T18:33:41.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"Openslr Multilingual",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_xls_r_lm_300m_hi | 8 | null | transformers | 12,953 | ---
language:
- hi
license: apache-2.0
tags:
- Openslr Multilingual
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2_xls_r_300m_hi_final
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 34.21
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) dataset and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
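For context, the WER and CER above are word- and character-level error rates based on the Levenshtein edit distance between the reference and the model transcript. The sketch below is an illustration only (the reported numbers come from the standard evaluation metrics, not this code):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def wer(reference, hypothesis):
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))
```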
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 |
LeoCordoba/beto2beto-cc-news-es-titles | 75342b7eb65540174cb71ea38eb6d2832ede72b9 | 2021-09-08T17:15:01.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | LeoCordoba | null | LeoCordoba/beto2beto-cc-news-es-titles | 8 | null | transformers | 12,954 | ---
language: es
tags:
- summarization
- spanish
- beto2beto
- encoder-decoder
license: apache-2.0
datasets:
- LeoCordoba/CC-NEWS-ES-titles
model-index:
- name: beto2beto-ccnews-titles-es
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "CCNEWS-ES-titles"
type: LeoCordoba/CC-NEWS-ES-titles
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 23.7478
- name: Validation ROUGE-2
type: rouge-2
value: 7.3616
- name: Validation ROUGE-L
type: rouge-l
value: 20.6615
- name: Validation ROUGE-Lsum
type: rouge-lsum
value: 20.7371
- name: Test ROUGE-1
type: rouge-1
value: 23.7415
- name: Test ROUGE-2
type: rouge-2
value: 7.3548
- name: Test ROUGE-L
type: rouge-l
value: 20.746
- name: Test ROUGE-Lsum
type: rouge-lsum
value: 20.8149
widget:
- text: |
La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno“, los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña.
---
## Hyperparameters
{
"num_train_epochs": 3,
"seed": 7,
"summary_column": "output_text",
"text_column": "text",
"encoder_max_length": 512,
"decoder_max_length": 36,
"batch_size": 256
}
## Usage
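A minimal inference sketch (an assumption based on the encoder-decoder architecture reported in this card and the hyperparameters above; not an official example):

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_name = "LeoCordoba/beto2beto-cc-news-es-titles"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

text = "La chocotorta fue elegida como el mejor postre del mundo por críticos internacionales."
# Max lengths mirror the encoder_max_length / decoder_max_length hyperparameters above.
inputs = tokenizer(text, return_tensors="pt", max_length=512, truncation=True)
output_ids = model.generate(**inputs, max_length=36)
title = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(title)
```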
## Results
| key | value |
| --- | ----- |
| eval loss | 4.539857387542725|
| eval_rouge1 |23.7478 |
| eval_rouge2 |7.3616 |
| eval_rougeL |20.6615 |
| eval_rougeLsum |20.7371 |
| eval_gen_len| 16.1806|
|test loss | 4.515065670013428|
| test_rouge1 | 23.7415|
| test_rouge2 | 7.3548|
| test_rougeL | 20.746|
| test_rougeLsum | 20.8149|
| test_gen_len| 16.1926|
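The ROUGE columns above measure n-gram overlap between generated and reference titles. As a rough illustration of ROUGE-1 F1 (the reported figures come from the standard ROUGE implementation, which applies additional tokenization and aggregation rules):

```python
from collections import Counter

def rouge1_f1(reference, candidate):
    # Unigram overlap between reference and candidate, combined into an F1 score.
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("la chocotorta es el mejor postre", "la chocotorta es un postre"))
```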
|
Li/bert-base-uncased-qnli | 009f3c2d7db527527bd176e343cd5ce6fe4da0ae | 2021-09-23T16:45:03.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Li | null | Li/bert-base-uncased-qnli | 8 | null | transformers | 12,955 | [bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on 2x NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=8
gradient_accumulation_steps=2
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
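The quoted total train batch size of 32 follows from the per-device batch size, the number of GPUs, and gradient accumulation. As a quick sanity check:

```python
per_device_train_batch_size = 8
n_gpus = 2                      # 2x GTX 1080 Ti
gradient_accumulation_steps = 2

# Effective batch size = per-device batch x devices x accumulation steps.
total_train_batch_size = per_device_train_batch_size * n_gpus * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```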
## Evaluation results
eval_accuracy = 0.916895
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark.
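The question-sentence pairing and overlap filtering described above can be sketched as follows (an illustrative reconstruction; the function names, threshold, and labels are assumptions, not the actual GLUE preprocessing code):

```python
def lexical_overlap(question, sentence):
    # Fraction of question words that also appear in the candidate sentence.
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s) / max(len(q), 1)

def squad_to_qnli(question, context_sentences, answer_sentence_idx, min_overlap=0.2):
    # Pair the question with every sentence in the paragraph; the sentence
    # containing the answer is labeled "entailment", the rest "not_entailment".
    pairs = []
    for i, sent in enumerate(context_sentences):
        if lexical_overlap(question, sent) < min_overlap:
            continue  # filter out pairs with low lexical overlap
        label = "entailment" if i == answer_sentence_idx else "not_entailment"
        pairs.append((question, sent, label))
    return pairs

pairs = squad_to_qnli(
    "Where was Tesla born?",
    ["Tesla was born in Smiljan.", "He later moved to the United States."],
    answer_sentence_idx=0,
)
print(pairs)
```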
(source: https://paperswithcode.com/dataset/qnli) |
LucasS/robertaBaseABSA | 2c36eff44769de4a591ff18d14c47455f8023210 | 2021-09-02T17:02:03.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | LucasS | null | LucasS/robertaBaseABSA | 8 | null | transformers | 12,956 | Entry not found |
Luciano/gpt2-small-portuguese-finetuned-peticoes | 01181a583f77daf24224f4938892100f942145f4 | 2022-02-18T10:19:55.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"pt",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | Luciano | null | Luciano/gpt2-small-portuguese-finetuned-peticoes | 8 | null | transformers | 12,957 | ---
language:
- pt
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-portuguese-finetuned-peticoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-portuguese-finetuned-peticoes
This model is a fine-tuned version of [pierreguillou/gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 404 | 3.5455 |
| 3.8364 | 2.0 | 808 | 3.4326 |
| 3.4816 | 3.0 | 1212 | 3.4062 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Lumos/yahoo2 | 47ad88c8fd4e8b36255c246862fb9305980ce884 | 2022-01-01T03:19:20.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Lumos | null | Lumos/yahoo2 | 8 | null | transformers | 12,958 | Entry not found |
M47Labs/arabert_multiclass_news | 7be4b01afba48a5fdda69b3be3eeda6ebc01344a | 2021-12-29T12:56:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | M47Labs | null | M47Labs/arabert_multiclass_news | 8 | null | transformers | 12,959 | Entry not found |
Maha/OGBV-gender-twtrobertabase-en-davidson | de047cfe28fc39832124fe4c916cc5c4e15f0afc | 2022-02-10T05:34:54.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Maha | null | Maha/OGBV-gender-twtrobertabase-en-davidson | 8 | null | transformers | 12,960 | Entry not found |
Media1129/keyword-tag-model-8000-9-16_more_ingredient | 6209fa0f96c9e3eacf4bf36a0106365074643ca3 | 2021-09-17T02:34:08.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-8000-9-16_more_ingredient | 8 | null | transformers | 12,961 | Entry not found |
MickyMike/7-GPT2SP-jirasoftware | c242593601de6b4858581b956048703ea48fbade | 2021-08-30T18:29:21.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MickyMike | null | MickyMike/7-GPT2SP-jirasoftware | 8 | null | transformers | 12,962 | Entry not found |
NDugar/v2xl-again-mnli | 267a390f88cb4b8bdb56066f95ba55d81a34f91f | 2021-12-22T20:20:12.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta-v1",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
]
| zero-shot-classification | false | NDugar | null | NDugar/v2xl-again-mnli | 8 | null | transformers | 12,963 | ---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa large model fine-tuned with MNLI task.
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when start from MNLI fine-tuned models, however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 \
  --per_device_train_batch_size 4 --learning_rate 3e-6 --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` |
NYTK/translation-bart-hu-en | 286ce59a2e57eb2961b48430e8a63395d50ed568 | 2022-02-14T13:28:13.000Z | [
"pytorch",
"bart",
"text2text-generation",
"hu",
"en",
"transformers",
"translation",
"license:gpl",
"autotrain_compatible"
]
| translation | false | NYTK | null | NYTK/translation-bart-hu-en | 8 | null | transformers | 12,964 | ---
language:
- hu
- en
tags:
- translation
license: gpl
metrics:
- sacrebleu
- chrf
widget:
- text: "Szeretném megragadni az alkalmat uram, hogy az engedélyét kérjem, hogy találkozhassak a lányával."
---
# BART Translation model
For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Source language: Hungarian
- Target language: English
- Pretrained on English WikiText-103 and Hungarian Wikipedia
- Finetuned on subcorpora from OPUS
- Segments: 56,837,602
## Limitations
- the model expects tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy))
## Results
| Model | BLEU | chrF-3 |
| ------------- | ------------- | ------------- |
| Google en-hu | 25.30 | 54.08 |
| **BART-base-enhu** | **34.38** | **58.88** |
| Google hu-en| 34.48 | 59.59 |
| **BART-base-huen** | **38.03** | **61.37** |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {{Yang Zijian Győző}},
pages = {15--29}
}
``` |
NbAiLab/roberta_NCC_des_128 | 25ccad90aef0d036a657db364bc0a9af962baa31 | 2022-01-04T15:39:34.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | NbAiLab | null | NbAiLab/roberta_NCC_des_128 | 8 | null | transformers | 12,965 | Just for performing some experiments. Do not use.
|
Neuralearn/autonlp-Summarization-AutoNLP-24135330 | 64eb31f79dc0bdfbadd1b31b33040796738dcd2b | 2021-10-21T21:44:05.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:Neuralearn/autonlp-data-Summarization-AutoNLP",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | Neuralearn | null | Neuralearn/autonlp-Summarization-AutoNLP-24135330 | 8 | null | transformers | 12,966 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Neuralearn/autonlp-data-Summarization-AutoNLP
co2_eq_emissions: 155.8470724053265
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 24135330
- CO2 Emissions (in grams): 155.8470724053265
## Validation Metrics
- Loss: 1.369327425956726
- Rouge1: 52.6656
- Rouge2: 30.5879
- RougeL: 40.1268
- RougeLsum: 47.4438
- Gen Len: 75.4625
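The ROUGE-1/2/L values above measure n-gram overlap between generated and reference summaries. As an illustration only (the reported numbers come from the standard `rouge_score` implementation, which also applies stemming and other normalization), ROUGE-1 F1 can be sketched as:

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    ref_counts = Counter(reference.split())
    cand_counts = Counter(candidate.split())
    # Each candidate unigram counts at most as often as it appears in the reference.
    overlap = sum(min(cand_counts[w], ref_counts[w]) for w in cand_counts)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-2 is the same computation over bigrams; ROUGE-L uses the longest common subsequence instead of fixed n-grams.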
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Neuralearn/autonlp-Summarization-AutoNLP-24135330
``` |
Norod78/hebrew_stories-gpt_neo-small | 34fc687756168731925b364de336a06ccf2831d7 | 2022-07-04T07:27:13.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"he",
"transformers",
"license:mit"
]
| text-generation | false | Norod78 | null | Norod78/hebrew_stories-gpt_neo-small | 8 | null | transformers | 12,967 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "תריסר מכשפות סג"
- text: "\n\nהאיש האחרון בעולם /"
- text: "פעם אחת, לפני שנים רבות"
- text: "הרמיוני הסתירה את"
- text: "לפתע, אור ירוק"
license: mit
---
# hebrew_stories-gpt_neo-small
Hebrew story-text generation model, fine-tuned from [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).
## Dataset
Text from various Hebrew books
|
Parsa/BBB_prediction_classification_SMILES | 4671f57c206cc150f94a982616c26b76cc95048b | 2022-02-23T07:41:24.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Parsa | null | Parsa/BBB_prediction_classification_SMILES | 8 | null | transformers | 12,968 | A fine-tuned model based on'DeepChem/ChemBERTa-77M-MLM'for Blood brain barrier permeability prediction based on SMILES string. There are also BiLSTM models available as well as these two models in 'https://github.com/mephisto121/BBBNLP if you want to check them all and check the codes too.
[](https://colab.research.google.com/drive/1jGYf3sq93yO4EbgVaEl3nlClrVatVaXS#scrollTo=AMEdQItmilAw) |
Plim/xls-r-300m-cv_8-fr | 799632ebe1927aed043c77458bd695664ff11dae | 2022-02-09T13:59:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Plim | null | Plim/xls-r-300m-cv_8-fr | 8 | null | transformers | 12,969 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
model-index:
- name: XLS-R-300m - French
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: to recompute with STEP 24000
- name: Test CER
type: cer
value: to recompute with STEP 24000
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Test WER
type: wer
value: 35.29
- name: Test CER
type: cer
value: 13.94
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0 (extended to 7.0 with training with checkpoint)
- mixed_precision_training: Native AMP
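The effective batch of 128 above comes from the per-device batch of 16 multiplied by 8 gradient-accumulation steps: gradients are summed over 8 micro-batches before each optimizer step. A minimal framework-free sketch of that pattern (illustrative only, not the actual Trainer code):

```python
def train_with_accumulation(micro_grads, accum_steps, lr, w=0.0):
    """Apply one optimizer step per `accum_steps` micro-batches.
    Scaling each gradient by 1/accum_steps makes the accumulated sum
    equal to the mean gradient over the effective batch."""
    acc, seen, updates = 0.0, 0, 0
    for g in micro_grads:
        acc += g / accum_steps
        seen += 1
        if seen == accum_steps:
            w -= lr * acc          # one step for the whole effective batch
            acc, seen = 0.0, 0
            updates += 1
    return w, updates

# Here: 16 examples per micro-batch * 8 accumulation steps -> effective batch of 128.
```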
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9114 | 0.29 | 1000 | inf | 0.9997 |
| 1.2436 | 0.57 | 2000 | inf | 0.4310 |
| 1.0552 | 0.86 | 3000 | inf | 0.3144 |
| 1.0044 | 1.15 | 4000 | inf | 0.2814 |
| 0.9718 | 1.43 | 5000 | inf | 0.2658 |
| 0.9502 | 1.72 | 6000 | inf | 0.2566 |
| 0.9418 | 2.01 | 7000 | inf | 0.2476 |
| 0.9215 | 2.29 | 8000 | inf | 0.2420 |
| 0.9236 | 2.58 | 9000 | inf | 0.2388 |
| 0.9014 | 2.87 | 10000 | inf | 0.2354 |
| 0.8814 | 3.15 | 11000 | inf | 0.2312 |
| 0.8809 | 3.44 | 12000 | inf | 0.2285 |
| 0.8717 | 3.73 | 13000 | inf | 0.2263 |
| 0.8787 | 4.01 | 14000 | inf | 0.2218 |
| 0.8567 | 4.3 | 15000 | inf | 0.2193 |
| 0.8488 | 4.59 | 16000 | inf | 0.2187 |
| 0.8359 | 4.87 | 17000 | inf | 0.2172 |
Training continued with checkpoint from STEP 17000:
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| / | 5.16 | 18000 | inf | 0.2176 |
| / | 5.45 | 19000 | inf | 0.2181 |
| / | 5.73 | 20000 | inf | 0.2155 |
| / | 6.02 | 21000 | inf | 0.2140 |
| / | 6.31 | 22000 | inf | 0.2124 |
| / | 6.59 | 23000 | inf | 0.2117 |
| / | 6.88 | 24000 | inf | 0.2116 |
It achieves the best result on the validation set on Step 24000:
- Wer: 0.2116
There was an issue with the validation loss calculation, which is why the loss column above reports `inf`.
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8` with split `test`
```bash
python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Plim/xls-r-300m-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
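The WER values reported above are the word-level Levenshtein distance between hypothesis and reference, divided by the number of reference words. For intuition, a minimal self-contained implementation (the official numbers come from the `eval.py` script above, not this sketch):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[len(hyp)] / len(ref)
```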
|
Plim/xls-r-300m-lm-fr | 9e32bde5d79cf50fc43c14bb9983e706f25ded3a | 2022-02-02T23:29:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | Plim | null | Plim/xls-r-300m-lm-fr | 8 | null | transformers | 12,970 | ---
language:
- fr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-6000](https://huggingface.co/./checkpoint-6000) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- Wer: 0.2457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.495 | 0.16 | 500 | 3.3883 | 1.0 |
| 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 |
| 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 |
| 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 |
| 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 |
| 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 |
| 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 |
| 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 |
| 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 |
| 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 |
| 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 |
| 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Proggleb/roberta-base-bne-finetuned-amazon_reviews_multi | 68b327ff95ba77740c938e477b1f9c3a81ae179c | 2021-08-26T20:21:41.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0"
]
| text-classification | false | Proggleb | null | Proggleb/roberta-base-bne-finetuned-amazon_reviews_multi | 8 | null | transformers | 12,971 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model_index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metric:
name: Accuracy
type: accuracy
value: 0.9185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3011
- Accuracy: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2427 | 1.0 | 125 | 0.2109 | 0.919 |
| 0.0986 | 2.0 | 250 | 0.3011 | 0.9185 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
SCORE/claim3a-distilbert-base-uncased | cd3f79bdc22f79009923253f18e70f2ecdf618a2 | 2021-12-14T16:48:58.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | SCORE | null | SCORE/claim3a-distilbert-base-uncased | 8 | null | transformers | 12,972 | Entry not found |
SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune | ca117e59d4c891d6b55ff8392cfe2bb96cc2b6a8 | 2021-06-23T09:56:52.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune | 8 | null | transformers | 12,973 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/code%20comment%20generation/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
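The BLEU scores in the table measure clipped n-gram precision between generated and reference comments, combined with a brevity penalty. A minimal unsmoothed sentence-level sketch (illustrative only; the reported numbers come from the standard evaluation scripts):

```python
import math
from collections import Counter

def sentence_bleu(reference: str, candidate: str, max_n: int = 4) -> float:
    """Unsmoothed sentence BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n), times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = sum(cand_ngrams.values())
        if overlap == 0 or total == 0:
            return 0.0  # any zero precision collapses the geometric mean
        log_prec += math.log(overlap / total)
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(log_prec / max_n)
```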
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_commit_generation_multitask | da9071406bc8f9a8fb98c0aef25a7bf3d585bcb2 | 2021-06-23T10:14:38.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_commit_generation_multitask | 8 | null | transformers | 12,974 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model on git commit using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commit: it works best with tokenized git commit.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate git commit message using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/commit%20generation/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SetFit/deberta-v3-large__sst2__train-16-4 | 98f81ecb705cf9ce2fdfb57a2293efc558b015fe | 2022-02-10T10:48:30.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-4 | 8 | null | transformers | 12,975 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-4
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6329
- Accuracy: 0.6392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6945 | 1.0 | 7 | 0.7381 | 0.2857 |
| 0.7072 | 2.0 | 14 | 0.7465 | 0.2857 |
| 0.6548 | 3.0 | 21 | 0.7277 | 0.4286 |
| 0.5695 | 4.0 | 28 | 0.6738 | 0.5714 |
| 0.4615 | 5.0 | 35 | 0.8559 | 0.5714 |
| 0.0823 | 6.0 | 42 | 1.0983 | 0.5714 |
| 0.0274 | 7.0 | 49 | 1.9937 | 0.5714 |
| 0.0106 | 8.0 | 56 | 2.2209 | 0.5714 |
| 0.0039 | 9.0 | 63 | 2.2114 | 0.5714 |
| 0.0031 | 10.0 | 70 | 2.2808 | 0.5714 |
| 0.0013 | 11.0 | 77 | 2.3707 | 0.5714 |
| 0.0008 | 12.0 | 84 | 2.4902 | 0.5714 |
| 0.0005 | 13.0 | 91 | 2.5208 | 0.5714 |
| 0.0007 | 14.0 | 98 | 2.5683 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-7 | 7db282ab8985b1a0f0d3c32b850e78313443ec34 | 2022-02-10T11:08:09.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-7 | 8 | null | transformers | 12,976 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-7
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.5063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6911 | 1.0 | 7 | 0.7455 | 0.2857 |
| 0.6844 | 2.0 | 14 | 0.7242 | 0.2857 |
| 0.6137 | 3.0 | 21 | 0.7341 | 0.4286 |
| 0.3805 | 4.0 | 28 | 1.0217 | 0.4286 |
| 0.2201 | 5.0 | 35 | 1.1437 | 0.2857 |
| 0.0296 | 6.0 | 42 | 1.5997 | 0.4286 |
| 0.0103 | 7.0 | 49 | 2.6835 | 0.4286 |
| 0.0046 | 8.0 | 56 | 3.3521 | 0.4286 |
| 0.002 | 9.0 | 63 | 3.7846 | 0.4286 |
| 0.0017 | 10.0 | 70 | 4.0088 | 0.4286 |
| 0.0018 | 11.0 | 77 | 4.1483 | 0.4286 |
| 0.0006 | 12.0 | 84 | 4.2235 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-8 | 66156307a5e47bd0f40e072baa7bc7a801ebcea5 | 2022-02-10T07:58:12.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-8 | 8 | null | transformers | 12,977 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0704
- Accuracy: 0.394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1031 | 1.0 | 10 | 1.1286 | 0.1 |
| 1.0648 | 2.0 | 20 | 1.1157 | 0.3 |
| 0.9982 | 3.0 | 30 | 1.1412 | 0.2 |
| 0.9283 | 4.0 | 40 | 1.2053 | 0.2 |
| 0.7958 | 5.0 | 50 | 1.1466 | 0.2 |
| 0.6668 | 6.0 | 60 | 1.1783 | 0.3 |
| 0.5068 | 7.0 | 70 | 1.2992 | 0.3 |
| 0.3741 | 8.0 | 80 | 1.3483 | 0.3 |
| 0.1653 | 9.0 | 90 | 1.4533 | 0.2 |
| 0.0946 | 10.0 | 100 | 1.6292 | 0.2 |
| 0.0569 | 11.0 | 110 | 1.8381 | 0.2 |
| 0.0346 | 12.0 | 120 | 2.0781 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-9 | 9807860680c06e74ac0e9b51eb816a0939c4e4ba | 2022-02-10T07:36:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-32-9 | 8 | null | transformers | 12,978 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5625
- Accuracy: 0.7353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 13 | 0.6805 | 0.5385 |
| 0.6642 | 2.0 | 26 | 0.6526 | 0.7692 |
| 0.5869 | 3.0 | 39 | 0.5773 | 0.8462 |
| 0.4085 | 4.0 | 52 | 0.4959 | 0.8462 |
| 0.2181 | 5.0 | 65 | 0.4902 | 0.6923 |
| 0.069 | 6.0 | 78 | 0.5065 | 0.8462 |
| 0.0522 | 7.0 | 91 | 0.6082 | 0.7692 |
| 0.0135 | 8.0 | 104 | 0.6924 | 0.7692 |
| 0.0084 | 9.0 | 117 | 0.5921 | 0.7692 |
| 0.0061 | 10.0 | 130 | 0.6477 | 0.7692 |
| 0.0047 | 11.0 | 143 | 0.6648 | 0.7692 |
| 0.0035 | 12.0 | 156 | 0.6640 | 0.7692 |
| 0.0031 | 13.0 | 169 | 0.6615 | 0.7692 |
| 0.0029 | 14.0 | 182 | 0.6605 | 0.7692 |
| 0.0026 | 15.0 | 195 | 0.6538 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Sofiascope/amazon-fine-tuned-wm | 9693982b5bf63a9ebf39227f4ae6d0f25732ebd9 | 2021-12-28T12:25:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sofiascope | null | Sofiascope/amazon-fine-tuned-wm | 8 | null | transformers | 12,979 | Entry not found |
StevenLimcorn/wav2vec2-xls-r-300m-zh-TW | c65cec854f6909b073df46ab266140d5bdd059ed | 2022-02-06T21:57:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"zh-TW",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | StevenLimcorn | null | StevenLimcorn/wav2vec2-xls-r-300m-zh-TW | 8 | null | transformers | 12,980 | ---
language:
- zh-TW
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-TW dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1786
- Wer: 0.8594
- Cer: 0.2964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
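With `lr_scheduler_type: linear` and 2000 warmup steps, the learning rate ramps linearly from 0 to the peak over the warmup, then decays linearly back to 0 over the remaining training steps. A minimal sketch of that schedule (it mirrors the shape of the `transformers` linear-warmup schedule, not its exact API):

```python
def linear_warmup_lr(step: int, peak_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup to peak_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```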
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 64.6189 | 2.51 | 500 | 63.8077 | 1.0 | 1.0 |
| 8.0561 | 5.03 | 1000 | 6.8014 | 1.0 | 1.0 |
| 6.0427 | 7.54 | 1500 | 6.0745 | 1.0 | 1.0 |
| 5.9357 | 10.05 | 2000 | 5.8682 | 1.0 | 1.0 |
| 5.0489 | 12.56 | 2500 | 4.4032 | 0.9990 | 0.7750 |
| 4.6184 | 15.08 | 3000 | 3.8383 | 0.9983 | 0.6768 |
| 4.365 | 17.59 | 3500 | 3.4633 | 0.9959 | 0.6299 |
| 4.1026 | 20.1 | 4000 | 3.0732 | 0.9902 | 0.5814 |
| 3.8655 | 22.61 | 4500 | 2.7638 | 0.9868 | 0.5465 |
| 3.6991 | 25.13 | 5000 | 2.4759 | 0.9811 | 0.5088 |
| 3.4894 | 27.64 | 5500 | 2.2937 | 0.9746 | 0.4852 |
| 3.3983 | 30.15 | 6000 | 2.1684 | 0.9733 | 0.4674 |
| 3.2736 | 32.66 | 6500 | 2.0372 | 0.9659 | 0.4458 |
| 3.1884 | 35.18 | 7000 | 1.9267 | 0.9648 | 0.4329 |
| 3.1248 | 37.69 | 7500 | 1.8408 | 0.9591 | 0.4217 |
| 3.0381 | 40.2 | 8000 | 1.7531 | 0.9503 | 0.4074 |
| 2.9515 | 42.71 | 8500 | 1.6880 | 0.9459 | 0.3967 |
| 2.8704 | 45.23 | 9000 | 1.6264 | 0.9378 | 0.3884 |
| 2.8128 | 47.74 | 9500 | 1.5621 | 0.9341 | 0.3782 |
| 2.7386 | 50.25 | 10000 | 1.5011 | 0.9243 | 0.3664 |
| 2.6646 | 52.76 | 10500 | 1.4608 | 0.9192 | 0.3575 |
| 2.6072 | 55.28 | 11000 | 1.4251 | 0.9148 | 0.3501 |
| 2.569 | 57.79 | 11500 | 1.3837 | 0.9060 | 0.3462 |
| 2.5091 | 60.3 | 12000 | 1.3589 | 0.9070 | 0.3392 |
| 2.4588 | 62.81 | 12500 | 1.3261 | 0.8966 | 0.3284 |
| 2.4083 | 65.33 | 13000 | 1.3052 | 0.8982 | 0.3265 |
| 2.3787 | 67.84 | 13500 | 1.2997 | 0.8908 | 0.3243 |
| 2.3457 | 70.35 | 14000 | 1.2778 | 0.8898 | 0.3187 |
| 2.3099 | 72.86 | 14500 | 1.2661 | 0.8830 | 0.3172 |
| 2.2559 | 75.38 | 15000 | 1.2475 | 0.8851 | 0.3143 |
| 2.2264 | 77.89 | 15500 | 1.2319 | 0.8739 | 0.3085 |
| 2.196 | 80.4 | 16000 | 1.2218 | 0.8722 | 0.3049 |
| 2.1613 | 82.91 | 16500 | 1.2093 | 0.8719 | 0.3051 |
| 2.1455 | 85.43 | 17000 | 1.2055 | 0.8624 | 0.3005 |
| 2.1193 | 87.94 | 17500 | 1.1975 | 0.8600 | 0.2982 |
| 2.0911 | 90.45 | 18000 | 1.1960 | 0.8648 | 0.3003 |
| 2.0884 | 92.96 | 18500 | 1.1871 | 0.8638 | 0.2971 |
| 2.0766 | 95.48 | 19000 | 1.1814 | 0.8617 | 0.2967 |
| 2.0735 | 97.99 | 19500 | 1.1801 | 0.8621 | 0.2969 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
StivenLancheros/mBERT-base-cased-NER-CONLL | 67d7ee529b58f75a207354439c573486090206ea | 2022-02-01T16:21:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2002",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/mBERT-base-cased-NER-CONLL | 8 | null | transformers | 12,981 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2002
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mBERT-base-cased-NER-CONLL
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2002
type: conll2002
args: es
metrics:
- name: Precision
type: precision
value: 0.8621083924079579
- name: Recall
type: recall
value: 0.8662683823529411
- name: F1
type: f1
value: 0.8641833810888252
- name: Accuracy
type: accuracy
value: 0.9790639230580277
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT-base-cased-NER-CONLL (EN-ES)
This model is a fine-tuned version of [bert-base-multilingual-cased ](https://huggingface.co/bert-base-multilingual-cased) on the conll2003 and conll2002 datasets. Training was performed separately.
It achieves the following results on the evaluation set:
Connll2003:
- Loss: 0.0585
- Precision: 0.9489
- Recall: 0.9541
- F1: 0.9515
- Accuracy: 0.9880
Conll2002:
- Loss: 0.1435
- Precision: 0.8621
- Recall: 0.8663
- F1: 0.8642
- Accuracy: 0.9791
## Model description
IOB tagging scheme with PER/LOC/MISC/ORG entity tags.
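As a minimal sketch of how the IOB tags this model emits can be grouped into entity spans (the helper function and example tokens below are illustrative, not part of the model card):

```python
# Group parallel token/tag lists into (entity_type, text) spans.
# Tag names follow the PER/LOC/MISC/ORG IOB scheme described above.
def group_entities(tokens, tags):
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = (tag[2:], [token])  # start a new span
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(token)      # continue the open span
        else:  # an "O" tag or inconsistent continuation closes the span
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

tokens = ["Angela", "Merkel", "visited", "Madrid", "."]
tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]
print(group_entities(tokens, tags))  # [('PER', 'Angela Merkel'), ('LOC', 'Madrid')]
```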
## Intended uses & limitations
More information needed
## Training and evaluation data
Conll2002/2003 (ES-EN)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
Conll2003:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1739 | 1.0 | 878 | 0.0741 | 0.9246 | 0.9181 | 0.9213 | 0.9823 |
| 0.045 | 2.0 | 1756 | 0.0586 | 0.9469 | 0.9476 | 0.9472 | 0.9870 |
| 0.0213 | 3.0 | 2634 | 0.0583 | 0.9503 | 0.9510 | 0.9506 | 0.9877 |
| 0.0113 | 4.0 | 3512 | 0.0585 | 0.9489 | 0.9541 | 0.9515 | 0.9880 |
Conll2002:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0739 | 1.0 | 4162 | 0.1322 | 0.8430 | 0.8267 | 0.8348 | 0.9741 |
| 0.0454 | 2.0 | 8324 | 0.1158 | 0.8664 | 0.8614 | 0.8639 | 0.9782 |
| 0.031 | 3.0 | 12486 | 0.1243 | 0.8521 | 0.8660 | 0.8590 | 0.9783 |
| 0.0136 | 4.0 | 16648 | 0.1435 | 0.8621 | 0.8663 | 0.8642 | 0.9791 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
StivenLancheros/roberta-base-bne-finetuned-ner | 4fda376e7722ce87649c7fff9bfb5526871ec7fc | 2021-11-08T13:41:04.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/roberta-base-bne-finetuned-ner | 8 | 1 | transformers | 12,982 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-bne-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9237957261861645
- name: Recall
type: recall
value: 0.9351077870655521
- name: F1
type: f1
value: 0.9294173377546188
- name: Accuracy
type: accuracy
value: 0.9847536857245595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-ner
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0659
- Precision: 0.9238
- Recall: 0.9351
- F1: 0.9294
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1931 | 1.0 | 878 | 0.0800 | 0.8892 | 0.8853 | 0.8872 | 0.9770 |
| 0.0409 | 2.0 | 1756 | 0.0655 | 0.9178 | 0.9238 | 0.9208 | 0.9828 |
| 0.0138 | 3.0 | 2634 | 0.0663 | 0.9207 | 0.9276 | 0.9241 | 0.9839 |
| 0.0051 | 4.0 | 3512 | 0.0659 | 0.9238 | 0.9351 | 0.9294 | 0.9848 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
StivenLancheros/xlm-roberta-base-finetuned-ner-false-finetuned-ner-2002-1 | 9ce31e8e41e7281f409231c347c535e808de58af | 2021-12-05T14:38:36.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/xlm-roberta-base-finetuned-ner-false-finetuned-ner-2002-1 | 8 | 1 | transformers | 12,983 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-ner-false-finetuned-ner-2002
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.941186271242919
- name: Recall
type: recall
value: 0.9506900033658701
- name: F1
type: f1
value: 0.945914266577361
- name: Accuracy
type: accuracy
value: 0.9904209337642615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner-false-finetuned-ner-2002
This model is a fine-tuned version of [StivenLancheros/xlm-roberta-base-finetuned-ner-false](https://huggingface.co/StivenLancheros/xlm-roberta-base-finetuned-ner-false) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0725
- Precision: 0.9412
- Recall: 0.9507
- F1: 0.9459
- Accuracy: 0.9904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.086 | 1.0 | 7021 | 0.0709 | 0.9221 | 0.9261 | 0.9241 | 0.9872 |
| 0.0352 | 2.0 | 14042 | 0.0871 | 0.9243 | 0.9354 | 0.9298 | 0.9879 |
| 0.0203 | 3.0 | 21063 | 0.0747 | 0.9398 | 0.9490 | 0.9444 | 0.9901 |
| 0.0184 | 4.0 | 28084 | 0.0725 | 0.9412 | 0.9507 | 0.9459 | 0.9904 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
T-Systems-onsite/cross-en-es-pt-roberta-sentence-transformer | f609cfe05b1332fcb44a73594ab4b2e11c99feab | 2022-06-28T19:56:15.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/cross-en-es-pt-roberta-sentence-transformer | 8 | null | transformers | 12,984 | Entry not found |
TehranNLP-org/bert-base-cased-avg-cola | 926e043f74219f164eb32f14ef771aafddcca623 | 2021-06-27T20:45:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-base-cased-avg-cola | 8 | null | transformers | 12,985 | The uploaded model is from epoch 4 with Matthews Correlation of 61.05
"best_metric": 0.4796141982078552,<br>
"best_model_checkpoint": "/content/output_dir/checkpoint-268",<br>
"epoch": 10.0,<br>
"global_step": 2680,<br>
"is_hyper_param_search": false,<br>
"is_local_process_zero": true,<br>
"is_world_process_zero": true,<br>
"max_steps": 2680,<br>
"num_train_epochs": 10,<br>
"total_flos": 7113018526540800.0,<br>
"trial_name": null,<br>
"trial_params": null<br>
<table class="table table-bordered table-hover table-condensed" style="width: 60%; overflow: auto">
<thead><tr><th title="Field #1">epoch</th>
<th title="Field #2">eval_loss</th>
<th title="Field #3">eval_matthews_correlation</th>
<th title="Field #4">eval_runtime</th>
<th title="Field #5">eval_samples_per_second</th>
<th title="Field #6">eval_steps_per_second</th>
<th title="Field #7">step</th>
<th title="Field #8">learning_rate</th>
<th title="Field #9">loss</th>
</tr></thead>
<tbody><tr>
<td align="left">1</td>
<td align="left">0.4796141982078552</td>
<td align="left">0.5351033849356494</td>
<td align="left">8.8067</td>
<td align="left">118.433</td>
<td align="left">14.875</td>
<td align="left">268</td>
<td align="left">0.000018067415730337083</td>
<td align="left">0.4913</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">0.5334435701370239</td>
<td align="left">0.5178799252679331</td>
<td align="left">8.9439</td>
<td align="left">116.616</td>
<td align="left">14.647</td>
<td align="left">536</td>
<td align="left">0.00001605992509363296</td>
<td align="left">0.2872</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">0.5544090270996094</td>
<td align="left">0.5649788851042796</td>
<td align="left">8.9467</td>
<td align="left">116.58</td>
<td align="left">14.642</td>
<td align="left">804</td>
<td align="left">0.000014052434456928841</td>
<td align="left">0.1777</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">0.5754779577255249</td>
<td align="left">0.6105374636148787</td>
<td align="left">8.8982</td>
<td align="left">117.215</td>
<td align="left">14.722</td>
<td align="left">1072</td>
<td align="left">0.000012044943820224718</td>
<td align="left">0.1263</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">0.7263916730880737</td>
<td align="left">0.5807606001872874</td>
<td align="left">8.9705</td>
<td align="left">116.27</td>
<td align="left">14.603</td>
<td align="left">1340</td>
<td align="left">0.000010037453183520601</td>
<td align="left">0.0905</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">0.8121512532234192</td>
<td align="left">0.5651092792103851</td>
<td align="left">8.9924</td>
<td align="left">115.987</td>
<td align="left">14.568</td>
<td align="left">1608</td>
<td align="left">0.00000802996254681648</td>
<td align="left">0.0692</td>
</tr>
<tr>
<td align="left">7</td>
<td align="left">0.941014289855957</td>
<td align="left">0.5632084517291658</td>
<td align="left">8.9583</td>
<td align="left">116.428</td>
<td align="left">14.623</td>
<td align="left">1876</td>
<td align="left">0.000006022471910112359</td>
<td align="left">0.0413</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">1.0095174312591553</td>
<td align="left">0.5856531698367675</td>
<td align="left">9.0029</td>
<td align="left">115.851</td>
<td align="left">14.551</td>
<td align="left">2144</td>
<td align="left">0.00000401498127340824</td>
<td align="left">0.0327</td>
</tr>
<tr>
<td align="left">9</td>
<td align="left">1.0425965785980225</td>
<td align="left">0.5941395545037332</td>
<td align="left">8.9217</td>
<td align="left">116.906</td>
<td align="left">14.683</td>
<td align="left">2412</td>
<td align="left">0.00000200749063670412</td>
<td align="left">0.0202</td>
</tr>
<tr>
<td align="left">10</td>
<td align="left">1.0782166719436646</td>
<td align="left">0.5956649094312695</td>
<td align="left">8.9472</td>
<td align="left">116.572</td>
<td align="left">14.641</td>
<td align="left">2680</td>
<td align="left">0</td>
<td align="left">0.0104</td>
</tr>
</tbody></table> |
Tejas3/distillbert_110_uncased_movie_genre | 5ddba8772058c97d54cdbd3d1b246cc7babdaa02 | 2021-08-25T22:17:24.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Tejas3 | null | Tejas3/distillbert_110_uncased_movie_genre | 8 | null | transformers | 12,986 | Entry not found |
TransQuest/monotransquest-da-ne_en-wiki | af687a9b0413a2d0b67a815e8571b5a25cf112bf | 2021-06-03T19:07:55.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ne-en",
"transformers",
"Quality Estimation",
"monotransquest",
"DA",
"license:apache-2.0"
]
| text-classification | false | TransQuest | null | TransQuest/monotransquest-da-ne_en-wiki | 8 | null | transformers | 12,987 | ---
language: ne-en
tags:
- Quality Estimation
- monotransquest
- DA
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-ne_en-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
TransQuest/siamesetransquest-da-et_en-wiki | c566f103d99d5c9b2157542c387aef841cc37e6c | 2021-07-23T08:31:12.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"et-en",
"transformers",
"Quality Estimation",
"siamesetransquest",
"da",
"license:apache-2.0"
]
| feature-extraction | false | TransQuest | null | TransQuest/siamesetransquest-da-et_en-wiki | 8 | null | transformers | 12,988 | ---
language: et-en
tags:
- Quality Estimation
- siamesetransquest
- da
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel
model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-et_en-wiki")
predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
Vasanth/tamil-sentiment-distilbert | 7016d9d2512a93c7042d8d8e5a49ff9357d1ff58 | 2021-08-23T17:16:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:tamilmixsentiment",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | Vasanth | null | Vasanth/tamil-sentiment-distilbert | 8 | 1 | transformers | 12,989 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tamilmixsentiment
metrics:
- accuracy
model_index:
- name: tamil-sentiment-distilbert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tamilmixsentiment
type: tamilmixsentiment
args: default
metric:
name: Accuracy
type: accuracy
value: 0.665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamil-sentiment-distilbert
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tamilmixsentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0230
- Accuracy: 0.665
## Dataset Information
- text: Tamil-English code-mixed comment.
- label: list of the possible sentiments
- LABEL_0: "Positive",
- LABEL_1: "Negative",
- LABEL_2: "Mixed_feelings",
- LABEL_3: "unknown_state",
- LABEL_4: "not-Tamil"
## Intended uses & limitations
This model was created for the sentiment classification task on the tamilmixsentiment dataset
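A minimal sketch of decoding this model's `LABEL_k` outputs into the sentiment names listed above (the mapping mirrors the card; the helper function itself is illustrative):

```python
# Map raw classifier labels to the sentiment names from the card.
ID2SENTIMENT = {
    "LABEL_0": "Positive",
    "LABEL_1": "Negative",
    "LABEL_2": "Mixed_feelings",
    "LABEL_3": "unknown_state",
    "LABEL_4": "not-Tamil",
}

def decode_prediction(raw_label):
    return ID2SENTIMENT[raw_label]

print(decode_prediction("LABEL_2"))  # Mixed_feelings
```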
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0442 | 1.0 | 250 | 0.9883 | 0.674 |
| 0.9227 | 2.0 | 500 | 0.9782 | 0.673 |
| 0.7591 | 3.0 | 750 | 1.0230 | 0.665 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
WENGSYX/Multilingual_SimCSE | df8cd37bdabd061c48484aff01915c2634273897 | 2022-02-10T12:25:07.000Z | [
"pytorch",
"deberta-v2",
"feature-extraction",
"transformers"
]
| feature-extraction | false | WENGSYX | null | WENGSYX/Multilingual_SimCSE | 8 | null | transformers | 12,990 | # Multilingual SimCSE
#### A contrastive learning model using parallel language pair training
##### Using parallel sentence pairs in different languages, text is mapped into a shared vector space, with pre-training similar to SimCSE
##### First, pre-trained parameters are loaded from the [mDeBERTa](https://huggingface.co/microsoft/mdeberta-v3-base) model; pre-training is then carried out on the [CCMatrix](https://github.com/facebookresearch/LASER/tree/main/tasks/CCMatrix) dataset.
##### Training data: 100 million parallel pairs
##### Training equipment: 4 * 3090
## Pipeline Code
```
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('WENGSYX/Multilingual_SimCSE')
tokenizer = AutoTokenizer.from_pretrained('WENGSYX/Multilingual_SimCSE')
word1 = tokenizer('Hello,world.',return_tensors='pt')
word2 = tokenizer('你好,世界',return_tensors='pt')
out1 = model(**word1).last_hidden_state.mean(1)
out2 = model(**word2).last_hidden_state.mean(1)
print(F.cosine_similarity(out1,out2))
----------------------------------------------------
tensor([0.8758], grad_fn=<DivBackward0>)
```
## Train Code
```
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer, AdamW

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = AutoModel.from_pretrained('WENGSYX/Multilingual_SimCSE').to(device)
tokenizer = AutoTokenizer.from_pretrained('WENGSYX/Multilingual_SimCSE')
optimizer = AdamW(model.parameters(), lr=1e-5)
def compute_loss(y_pred, t=0.05, device="cuda"):
idxs = torch.arange(0, y_pred.shape[0], device=device)
y_true = idxs + 1 - idxs % 2 * 2
similarities = F.cosine_similarity(y_pred.unsqueeze(1), y_pred.unsqueeze(0), dim=2)
similarities = similarities - torch.eye(y_pred.shape[0], device=device) * 1e12
similarities = similarities / t
loss = F.cross_entropy(similarities, y_true)
return torch.mean(loss)
wordlist = [['Hello,world','你好,世界'],['Pensa che il bianco rappresenti la purezza.','Он думает, что белые символизируют чистоту.']]
input_ids, attention_mask, token_type_ids = [], [], []
for x in wordlist:
text1 = tokenizer(x[0], padding='max_length', truncation=True, max_length=512)
input_ids.append(text1['input_ids'])
attention_mask.append(text1['attention_mask'])
text2 = tokenizer(x[1], padding='max_length', truncation=True, max_length=512)
input_ids.append(text2['input_ids'])
attention_mask.append(text2['attention_mask'])
input_ids = torch.tensor(input_ids,device=device)
attention_mask = torch.tensor(attention_mask,device=device)
output = model(input_ids=input_ids,attention_mask=attention_mask)
output = output.last_hidden_state.mean(1)
loss = compute_loss(output)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```
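The label construction inside `compute_loss` above can be hard to read; the sketch below spells out what `idxs + 1 - idxs % 2 * 2` computes. Sentences are interleaved as (x0, translation of x0, x1, translation of x1, ...), so each even index pairs with the next index and each odd index pairs with the previous one:

```python
# Positive-pair index for the interleaved batch layout used above:
# even i -> i + 1, odd i -> i - 1.
def positive_index(i):
    return i + 1 - (i % 2) * 2

labels = [positive_index(i) for i in range(6)]
print(labels)  # [1, 0, 3, 2, 5, 4]
```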
|
Wataru/T5-base-ja-open2ch-dialogue | f462e5a6810b0ff96165e30702a5c0d62cc8920d | 2021-07-22T15:52:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Wataru | null | Wataru/T5-base-ja-open2ch-dialogue | 8 | null | transformers | 12,991 | Entry not found |
Wiirin/DistilBERT-finetuned-PubMed-FoodCancer | 81dbc49eb57fa469704ecab223c397cb5fd1e2e5 | 2021-11-08T09:39:25.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Wiirin | null | Wiirin/DistilBERT-finetuned-PubMed-FoodCancer | 8 | null | transformers | 12,992 | Entry not found |
Wikidepia/indonesian-punctuation | 638025db378f952e70e51cb94481638c42bf67d2 | 2021-12-03T10:06:53.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Wikidepia | null | Wikidepia/indonesian-punctuation | 8 | null | transformers | 12,993 | Entry not found |
Wikidepia/wav2vec2-xls-r-300m-indonesian | 8a9d507f0804f8e5fca07b17214b2c0266ba7491 | 2022-03-23T18:26:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Wikidepia | null | Wikidepia/wav2vec2-xls-r-300m-indonesian | 8 | null | transformers | 12,994 | ---
language:
- id
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- id
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: XLS-R-300M - Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: id
metrics:
- name: Test WER
type: wer
value: 5.046
- name: Test CER
type: cer
value: 1.699
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: id
metrics:
- name: Test WER
type: wer
value: 41.31
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: id
metrics:
- name: Test WER
type: wer
value: 52.23
---
# Wav2Vec2 XLS-R-300M - Indonesian
This model is a fine-tuned version of `facebook/wav2vec2-xls-r-300m` on the `mozilla-foundation/common_voice_8_0` and [MagicHub Indonesian Conversational Speech Corpus](https://magichub.com/datasets/indonesian-conversational-speech-corpus/).
|
Xenova/sponsorblock-base-v1 | d1c8305152f46ac8b914294cb30bddd4ad778a59 | 2022-01-30T20:55:35.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Xenova | null | Xenova/sponsorblock-base-v1 | 8 | 1 | transformers | 12,995 | Entry not found |
aXhyra/presentation_irony_31415 | 33c333e05f2ef01f46c7d27f6be4d98220eee15a | 2021-12-15T10:14:53.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_irony_31415 | 8 | null | transformers | 12,996 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_irony_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6753923142373446
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_irony_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9694
- F1: 0.6754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.1637764704815665e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6601 | 1.0 | 90 | 0.6298 | 0.6230 |
| 0.4887 | 2.0 | 180 | 0.6039 | 0.6816 |
| 0.2543 | 3.0 | 270 | 0.7362 | 0.6803 |
| 0.1472 | 4.0 | 360 | 0.9694 | 0.6754 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_irony_42 | 7be6c13c9b9733995e28eff7aae553be39944e7b | 2021-12-15T10:10:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_irony_42 | 8 | null | transformers | 12,997 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_irony_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6745358521762839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_irony_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9344
- F1: 0.6745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.1637764704815665e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6675 | 1.0 | 90 | 0.5988 | 0.6684 |
| 0.5872 | 2.0 | 180 | 0.6039 | 0.6742 |
| 0.3953 | 3.0 | 270 | 0.8549 | 0.6557 |
| 0.0355 | 4.0 | 360 | 0.9344 | 0.6745 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aapot/wav2vec2-large-xlsr-53-finnish | c5c998277903efc984e20d1b52738b05be6e740e | 2022-03-28T17:56:36.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | aapot | null | aapot/wav2vec2-large-xlsr-53-finnish | 8 | 0 | transformers | 12,998 | ---
language: fi
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Aapo Tanskanen
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 32.378771
---
# NOTE: this is an old model and should not be used anymore! Much better, newer models are available at our organization hub: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) and [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10 Finnish](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\...\…\–\é]'
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio batches
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.378771 %
## Training
The Common Voice `train`, `validation` and `other` datasets were used for training as well as `CSS10 Finnish` and `Finnish parliament session 2` datasets.
The script used for training can be found in [this Google Colab notebook](https://colab.research.google.com/drive/1vnEGC9BnNRmVyIHj-0UsVulh_cUYSGWA?usp=sharing) |
aapot/wav2vec2-xlsr-1b-finnish-lm-v2 | 192fd9f4ff5e9de4a2681a47c30239544bffd214 | 2022-03-28T17:26:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"transformers",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | aapot | null | aapot/wav2vec2-xlsr-1b-finnish-lm-v2 | 8 | 1 | transformers | 12,999 | ---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 4.09
- name: Test CER
type: cer
value: 0.88
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) model; it has just been copied/moved to the `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1-billion-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audios of similar length. However, you can try it with much longer audios too and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
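The chunking method works by splitting long audio into fixed-size overlapping windows, transcribing each window, and stitching the transcripts back together. The `transformers` pipeline does this internally when you pass `chunk_length_s`; the `chunk_audio` helper below is only a minimal illustration of the windowing step:

```python
def chunk_audio(samples, chunk_len, overlap):
    """Split a 1-D list of samples into overlapping fixed-size windows."""
    step = chunk_len - overlap
    chunks, start = [], 0
    while start < len(samples):
        chunks.append(samples[start:start + chunk_len])
        if start + chunk_len >= len(samples):
            break
        start += step
    return chunks

windows = chunk_audio(list(range(10)), chunk_len=4, overlap=2)
# each window shares 2 samples with its neighbour, so no audio is lost at chunk edges
```

The overlap lets the decoder discard unreliable predictions near chunk boundaries before merging.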
The vast majority of the data used for fine-tuning came from the Finnish Parliament dataset, so this model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase was trained with text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects (especially since Wikipedia contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your target domain and use that in decoding.
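To illustrate what the language model contributes during decoding: candidate transcriptions from the acoustic model are rescored by adding a weighted LM score, so a candidate that is acoustically slightly worse but linguistically far more plausible can win. A toy sketch of this shallow-fusion idea (the scores and the `alpha` weight are made up; the real decoding uses a beam-search CTC decoder such as pyctcdecode):

```python
def rescore(candidates, lm_logprob, alpha=0.5):
    """Pick the candidate maximizing acoustic + alpha * LM log-probability."""
    best = max(candidates, key=lambda c: c[1] + alpha * lm_logprob.get(c[0], -20.0))
    return best[0]

candidates = [("se on hyvvä", -0.8),   # acoustically best, but bad Finnish
              ("se on hyvä",  -1.0)]   # slightly worse acoustically
lm = {"se on hyvä": -2.0, "se on hyvvä": -8.0}
```

With `alpha=0.5` the well-formed sentence wins despite its lower acoustic score; with `alpha=0` the decoder falls back to the acoustics alone.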
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include maximum length of 20 seconds long audio samples.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
Training script was provided by Hugging Face and it is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
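WER as reported above is the standard word-level edit distance divided by the number of reference words (CER is the same quantity computed over characters). A minimal reference implementation of the metric:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i reference and first j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i          # deletions
    for j in range(len(h) + 1):
        dp[0][j] = j          # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(r)][len(h)] / len(r)
```

The `eval.py` script uses the Hugging Face `wer`/`cer` metrics, which implement the same definition.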
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |