| modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-fi-mt | 50a13920f9ee99bbf35be1d6bd0cd87eb5df9c5c | 2021-09-09T21:49:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"mt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-mt | 9 | null | transformers | 12,100 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-mt
* source languages: fi
* target languages: mt
* OPUS readme: [fi-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-mt/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mt/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mt/opus-2020-01-08.eval.txt)
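All of the models in this dump follow the `opus-mt-{src}-{tgt}` naming convention (here `fi` → `mt`). A small helper to recover the language pair from a model id — a sketch that assumes the simple two-code form used by the cards in this dump, not the longer multi-part names some other OPUS-MT releases use:

```python
def parse_opus_mt_id(model_id):
    """Split a Helsinki-NLP/opus-mt-{src}-{tgt} id into (src, tgt) codes.

    Assumes the two-code naming used by the cards in this dump.
    """
    name = model_id.split("/")[-1]  # drop the "Helsinki-NLP/" namespace
    prefix = "opus-mt-"
    if not name.startswith(prefix):
        raise ValueError(f"not an opus-mt model id: {model_id}")
    # split only on the first remaining hyphen: "fi-mt" -> ("fi", "mt")
    src, tgt = name[len(prefix):].split("-", 1)
    return src, tgt
```

Note that the source code may be a language-group code rather than a single language (e.g. `gmw` for West Germanic, further down in this dump).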
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.mt | 29.9 | 0.490 |
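The chr-F column above is a character n-gram F-score (chrF, Popović 2015). The following is a simplified sketch for intuition only — no whitespace handling or word n-grams — and the published numbers come from the official evaluation scripts, not this code:

```python
from collections import Counter

def ngrams(s, n):
    """Multiset of character n-grams of s."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hyp, ref, max_n=6, beta=2.0):
    """Simplified chrF: average F_beta over character n-gram orders 1..max_n."""
    scores = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        if not h or not r:
            continue  # string too short for this order
        overlap = sum((h & r).values())
        prec = overlap / sum(h.values())
        rec = overlap / sum(r.values())
        if prec + rec == 0:
            scores.append(0.0)
        else:
            scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```

With `beta=2`, recall is weighted twice as heavily as precision, which is the chrF2 variant reported for the Tatoeba-Challenge models later in this dump.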
|
Helsinki-NLP/opus-mt-fi-nso | 33381477145dbeb38f8c51320b434d106e42ec6f | 2021-09-09T21:49:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"nso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-nso | 9 | null | transformers | 12,101 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-nso
* source languages: fi
* target languages: nso
* OPUS readme: [fi-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-nso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nso/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.nso | 35.8 | 0.564 |
|
Helsinki-NLP/opus-mt-fi-swc | 25808885bb6a048cbf1a0bdc70dc65bfef4e8d6d | 2021-09-09T21:51:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"swc",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-swc | 9 | null | transformers | 12,102 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-swc
* source languages: fi
* target languages: swc
* OPUS readme: [fi-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-swc/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-swc/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-swc/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.swc | 27.5 | 0.515 |
|
Helsinki-NLP/opus-mt-fj-fr | a4ed3c5f4b777029e38f903ddf74f545e8414b82 | 2021-09-09T21:52:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fj",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fj-fr | 9 | null | transformers | 12,103 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fj-fr
* source languages: fj
* target languages: fr
* OPUS readme: [fj-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fj-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fj.fr | 24.0 | 0.407 |
|
Helsinki-NLP/opus-mt-fr-mt | a2e29216c2370e58c6da9a7408f0b0baca02181c | 2021-09-09T21:55:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"mt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-mt | 9 | null | transformers | 12,104 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-mt
* source languages: fr
* target languages: mt
* OPUS readme: [fr-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mt | 28.7 | 0.466 |
|
Helsinki-NLP/opus-mt-fr-srn | beac2414acee0c0d1b2f6527735749a04627d612 | 2021-09-09T21:56:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"srn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-srn | 9 | null | transformers | 12,105 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-srn
* source languages: fr
* target languages: srn
* OPUS readme: [fr-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.srn | 27.4 | 0.459 |
|
Helsinki-NLP/opus-mt-fr-ty | 6daaa2a2c18acc2156e63b6c759f76a951f4d4e4 | 2021-09-09T21:57:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ty",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ty | 9 | null | transformers | 12,106 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ty
* source languages: fr
* target languages: ty
* OPUS readme: [fr-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ty | 39.6 | 0.561 |
|
Helsinki-NLP/opus-mt-fr-yap | 62973627846854e69452cff56abe3f2cf97fe341 | 2021-09-09T21:58:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"yap",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-yap | 9 | null | transformers | 12,107 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-yap
* source languages: fr
* target languages: yap
* OPUS readme: [fr-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yap | 25.8 | 0.434 |
|
Helsinki-NLP/opus-mt-fr-yo | fb5403e5a10c1be60e81c15311e110e11ae2e127 | 2021-09-09T21:58:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"yo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-yo | 9 | null | transformers | 12,108 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-yo
* source languages: fr
* target languages: yo
* OPUS readme: [fr-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yo | 25.9 | 0.415 |
|
Helsinki-NLP/opus-mt-gaa-fr | ff85d1cef66d2c3356ad45722438e78704f93bc9 | 2021-09-09T21:58:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gaa",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gaa-fr | 9 | null | transformers | 12,109 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gaa-fr
* source languages: gaa
* target languages: fr
* OPUS readme: [gaa-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fr | 27.8 | 0.455 |
|
Helsinki-NLP/opus-mt-gil-fi | 2af28c83bbe0a8c52a70c985859c22a5748fe870 | 2021-09-09T21:59:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gil",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gil-fi | 9 | null | transformers | 12,110 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gil-fi
* source languages: gil
* target languages: fi
* OPUS readme: [gil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fi | 23.1 | 0.447 |
|
Helsinki-NLP/opus-mt-gl-pt | 405800cc336304df910c14565697e2c3aa8622df | 2021-01-18T08:52:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gl",
"pt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gl-pt | 9 | null | transformers | 12,111 | ---
language:
- gl
- pt
tags:
- translation
license: apache-2.0
---
### glg-por
* source group: Galician
* target group: Portuguese
* OPUS readme: [glg-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): por
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.por | 57.9 | 0.758 |
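The System Info block below reports a brevity penalty (0.977) alongside the BLEU score. This is the standard BLEU brevity penalty, which downscales the score when the system output is shorter than the reference. A minimal sketch; the hypothesis length of 3008 tokens in the test is a hypothetical value chosen to be consistent with the reported ref_len and penalty:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """Standard BLEU brevity penalty: 1 if the hypothesis is at least as
    long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

The penalty only ever reduces BLEU; overly long outputs are instead punished through n-gram precision.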
### System Info:
- hf_name: glg-por
- source_languages: glg
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'pt']
- src_constituents: {'glg'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: por
- short_pair: gl-pt
- chrF2_score: 0.758
- bleu: 57.9
- brevity_penalty: 0.977
- ref_len: 3078.0
- src_name: Galician
- tgt_name: Portuguese
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: glg-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gmw-en | 14eecd0cc660fdc4319eb82129f6a5873c56bf1b | 2021-01-18T08:53:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gmw-en | 9 | null | transformers | 12,112 | ---
language:
- nl
- en
- lb
- af
- de
- fy
- yi
- gmw
tags:
- translation
license: apache-2.0
---
### gmw-eng
* source group: West Germanic languages
* target group: English
* OPUS readme: [gmw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.538 |
| news-test2008-deueng.deu.eng | 25.7 | 0.534 |
| newstest2009-deueng.deu.eng | 25.1 | 0.530 |
| newstest2010-deueng.deu.eng | 27.9 | 0.565 |
| newstest2011-deueng.deu.eng | 25.3 | 0.539 |
| newstest2012-deueng.deu.eng | 26.6 | 0.548 |
| newstest2013-deueng.deu.eng | 29.6 | 0.565 |
| newstest2014-deen-deueng.deu.eng | 30.2 | 0.571 |
| newstest2015-ende-deueng.deu.eng | 31.5 | 0.577 |
| newstest2016-ende-deueng.deu.eng | 36.7 | 0.622 |
| newstest2017-ende-deueng.deu.eng | 32.3 | 0.585 |
| newstest2018-ende-deueng.deu.eng | 39.9 | 0.638 |
| newstest2019-deen-deueng.deu.eng | 35.9 | 0.611 |
| Tatoeba-test.afr-eng.afr.eng | 61.8 | 0.750 |
| Tatoeba-test.ang-eng.ang.eng | 7.3 | 0.220 |
| Tatoeba-test.deu-eng.deu.eng | 48.3 | 0.657 |
| Tatoeba-test.enm-eng.enm.eng | 16.1 | 0.423 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.168 |
| Tatoeba-test.fry-eng.fry.eng | 28.6 | 0.488 |
| Tatoeba-test.gos-eng.gos.eng | 15.5 | 0.326 |
| Tatoeba-test.gsw-eng.gsw.eng | 12.7 | 0.308 |
| Tatoeba-test.ksh-eng.ksh.eng | 8.4 | 0.254 |
| Tatoeba-test.ltz-eng.ltz.eng | 28.7 | 0.453 |
| Tatoeba-test.multi.eng | 48.5 | 0.646 |
| Tatoeba-test.nds-eng.nds.eng | 31.4 | 0.509 |
| Tatoeba-test.nld-eng.nld.eng | 58.1 | 0.728 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.1 | 0.406 |
| Tatoeba-test.sco-eng.sco.eng | 40.8 | 0.570 |
| Tatoeba-test.stq-eng.stq.eng | 20.3 | 0.380 |
| Tatoeba-test.swg-eng.swg.eng | 20.5 | 0.315 |
| Tatoeba-test.yid-eng.yid.eng | 16.0 | 0.366 |
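Because this model covers many source languages, the per-pair Tatoeba scores above can be compared programmatically. A small sketch over a hand-copied subset of the rows (the dict is not the full table):

```python
# BLEU scores for a subset of the Tatoeba-test rows above, copied by hand
bleu = {
    "afr-eng": 61.8,
    "nld-eng": 58.1,
    "deu-eng": 48.3,
    "fry-eng": 28.6,
    "ang-eng": 7.3,
    "frr-eng": 7.0,
}

# rank source languages by BLEU, strongest first
ranked = sorted(bleu.items(), key=lambda kv: kv[1], reverse=True)
best_pair, best_score = ranked[0]
worst_pair, worst_score = ranked[-1]
```

The spread (61.8 BLEU for Afrikaans vs. single digits for Old English and North Frisian) reflects how unevenly training data is distributed across the constituent languages.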
### System Info:
- hf_name: gmw-eng
- source_languages: gmw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gmw
- tgt_alpha3: eng
- short_pair: gmw-en
- chrF2_score: 0.646
- bleu: 48.5
- brevity_penalty: 0.997
- ref_len: 72584.0
- src_name: West Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gmw
- tgt_alpha2: en
- prefer_old: False
- long_pair: gmw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-is-fi | e943e0b23c71ff6abf4da89ba878cc486cec5bfa | 2021-09-09T22:12:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"is",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-is-fi | 9 | null | transformers | 12,113 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-is-fi
* source languages: is
* target languages: fi
* OPUS readme: [is-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.is.fi | 25.0 | 0.489 |
|
Helsinki-NLP/opus-mt-is-sv | 22c505f87c5484b3e73e042937087d2de434a223 | 2021-09-09T22:12:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"is",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-is-sv | 9 | null | transformers | 12,114 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-is-sv
* source languages: is
* target languages: sv
* OPUS readme: [is-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.is.sv | 30.4 | 0.495 |
|
Helsinki-NLP/opus-mt-it-sv | ca009b276a527f4bfc8eb45bfee1a37f45b7b88f | 2021-09-10T13:53:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-sv | 9 | null | transformers | 12,115 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-it-sv
* source languages: it
* target languages: sv
* OPUS readme: [it-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-sv/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.it.sv | 56.0 | 0.707 |
|
Helsinki-NLP/opus-mt-ja-da | 74a908dc132b73b3e0e5f32e9362ca6593b242de | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"da",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-da | 9 | null | transformers | 12,116 | ---
language:
- ja
- da
tags:
- translation
license: apache-2.0
---
### jpn-dan
* source group: Japanese
* target group: Danish
* OPUS readme: [jpn-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-dan/README.md)
* model: transformer-align
* source language(s): jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.dan | 43.2 | 0.590 |
### System Info:
- hf_name: jpn-dan
- source_languages: jpn
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'da']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: dan
- short_pair: ja-da
- chrF2_score: 0.59
- bleu: 43.2
- brevity_penalty: 0.972
- ref_len: 5823.0
- src_name: Japanese
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: da
- prefer_old: False
- long_pair: jpn-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ko-hu | 129183c58bad505b70f9a82c41ac7eadfe481cac | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ko",
"hu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ko-hu | 9 | null | transformers | 12,117 | ---
language:
- ko
- hu
tags:
- translation
license: apache-2.0
---
### kor-hun
* source group: Korean
* target group: Hungarian
* OPUS readme: [kor-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-hun/README.md)
* model: transformer-align
* source language(s): kor kor_Hang kor_Latn
* target language(s): hun
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kor.hun | 28.6 | 0.520 |
### System Info:
- hf_name: kor-hun
- source_languages: kor
- target_languages: hun
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-hun/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ko', 'hu']
- src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'}
- tgt_constituents: {'hun'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.test.txt
- src_alpha3: kor
- tgt_alpha3: hun
- short_pair: ko-hu
- chrF2_score: 0.52
- bleu: 28.6
- brevity_penalty: 0.905
- ref_len: 1615.0
- src_name: Korean
- tgt_name: Hungarian
- train_date: 2020-06-17
- src_alpha2: ko
- tgt_alpha2: hu
- prefer_old: False
- long_pair: kor-hun
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-lua-fr | 4b48d4ff9cf51fc73c09c3f51088fa8fc877a1bd | 2021-09-10T13:56:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lua",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lua-fr | 9 | null | transformers | 12,118 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-fr
* source languages: lua
* target languages: fr
* OPUS readme: [lua-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.fr | 25.7 | 0.429 |
|
Helsinki-NLP/opus-mt-lus-fi | 15f7c3660bdd6db1a5bec2a4ef68f34e108ab4b7 | 2021-09-10T13:56:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lus",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lus-fi | 9 | null | transformers | 12,119 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lus-fi
* source languages: lus
* target languages: fi
* OPUS readme: [lus-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.fi | 22.6 | 0.441 |
|
Helsinki-NLP/opus-mt-no-ru | 9403f7e76db6ad7b121e5ee9c2ab375bca4b334d | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"no",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-no-ru | 9 | null | transformers | 12,120 | ---
language:
- no
- ru
tags:
- translation
license: apache-2.0
---
### nor-rus
* source group: Norwegian
* target group: Russian
* OPUS readme: [nor-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.rus | 18.6 | 0.400 |
### System Info:
- hf_name: nor-rus
- source_languages: nor
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'ru']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: rus
- short_pair: no-ru
- chrF2_score: 0.4
- bleu: 18.6
- brevity_penalty: 0.958
- ref_len: 10671.0
- src_name: Norwegian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: ru
- prefer_old: False
- long_pair: nor-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ny-es | ecf215cfc68db14727c7d6eacfd0bd71b43e419f | 2021-09-10T13:59:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ny",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ny-es | 9 | null | transformers | 12,121 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ny-es
* source languages: ny
* target languages: es
* OPUS readme: [ny-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.es | 27.9 | 0.457 |
|
Helsinki-NLP/opus-mt-pap-es | c0231a7b6778c6d7c97ef255234220a442591c55 | 2021-09-10T14:00:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pap",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pap-es | 9 | null | transformers | 12,122 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pap-es
* source languages: pap
* target languages: es
* OPUS readme: [pap-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.es | 32.3 | 0.518 |
|
Helsinki-NLP/opus-mt-pis-fr | 4b3761ce6333a6acd45276540dbcd6f73d0599a0 | 2021-09-10T14:01:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pis",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pis-fr | 9 | null | transformers | 12,123 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pis-fr
* source languages: pis
* target languages: fr
* OPUS readme: [pis-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fr | 24.9 | 0.421 |
|
Helsinki-NLP/opus-mt-pl-no | 180f6730794a2ba689d997d179ca7fbef883ccbf | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pl",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pl-no | 9 | null | transformers | 12,124 | ---
language:
- pl
- "no"
tags:
- translation
license: apache-2.0
---
### pol-nor
* source group: Polish
* target group: Norwegian
* OPUS readme: [pol-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md)
* model: transformer-align
* source language(s): pol
* target language(s): nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.nor | 27.5 | 0.479 |
### System Info:
- hf_name: pol-nor
- source_languages: pol
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'no']
- src_constituents: {'pol'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt
- src_alpha3: pol
- tgt_alpha3: nor
- short_pair: pl-no
- chrF2_score: 0.479
- bleu: 27.5
- brevity_penalty: 0.9690000000000001
- ref_len: 2045.0
- src_name: Polish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: pl
- tgt_alpha2: no
- prefer_old: False
- long_pair: pol-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-pon-fi | cfe8cd8c84d0f509a22eeecf9f4f3b4e068518ad | 2021-09-10T14:01:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pon",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pon-fi | 9 | null | transformers | 12,125 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pon-fi
* source languages: pon
* target languages: fi
* OPUS readme: [pon-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.fi | 22.2 | 0.434 |
|
Helsinki-NLP/opus-mt-prl-es | 0491a81cbf737a9446ff3a836f96364c756f06fa | 2021-09-10T14:01:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"prl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-prl-es | 9 | null | transformers | 12,126 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-prl-es
* source languages: prl
* target languages: es
* OPUS readme: [prl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/prl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.prl.es | 93.3 | 0.955 |
|
Helsinki-NLP/opus-mt-ru-da | 1b3671c92a4aeb5538b60460f52aa1bfb4be4c5c | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"da",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-da | 9 | null | transformers | 12,127 | ---
language:
- ru
- da
tags:
- translation
license: apache-2.0
---
### rus-dan
* source group: Russian
* target group: Danish
* OPUS readme: [rus-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.dan | 56.6 | 0.714 |
### System Info:
- hf_name: rus-dan
- source_languages: rus
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'da']
- src_constituents: {'rus'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: dan
- short_pair: ru-da
- chrF2_score: 0.7140000000000001
- bleu: 56.6
- brevity_penalty: 0.977
- ref_len: 11746.0
- src_name: Russian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: da
- prefer_old: False
- long_pair: rus-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-st-sv | 31d949849d50a18eec58c89d0aac3707e4822508 | 2021-09-10T14:05:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"st",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-st-sv | 9 | null | transformers | 12,128 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-st-sv
* source languages: st
* target languages: sv
* OPUS readme: [st-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.sv | 33.5 | 0.523 |
|
Helsinki-NLP/opus-mt-sv-efi | 5f27be86bf1089971ddb8f6217b12b04370089c6 | 2021-09-10T14:06:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"efi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-efi | 9 | null | transformers | 12,129 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-efi
* source languages: sv
* target languages: efi
* OPUS readme: [sv-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-efi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-efi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-efi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.efi | 29.4 | 0.502 |
|
Helsinki-NLP/opus-mt-sv-niu | ca139393b4eb9d748acd203b7103fe13786fb76d | 2021-09-10T14:08:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"niu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-niu | 9 | null | transformers | 12,130 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-niu
* source languages: sv
* target languages: niu
* OPUS readme: [sv-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-niu/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-niu/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-niu/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.niu | 37.0 | 0.575 |
|
Helsinki-NLP/opus-mt-sv-ty | 14528339a50e372a8e58390e830af7bc076c572a | 2021-09-10T14:10:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"ty",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-ty | 9 | null | transformers | 12,131 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ty
* source languages: sv
* target languages: ty
* OPUS readme: [sv-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ty/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ty | 40.5 | 0.571 |
|
Helsinki-NLP/opus-mt-tiv-fr | 8e0eeddd684b98a6835c4cb18f2b64ebc1b0339f | 2021-09-11T10:48:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tiv",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tiv-fr | 9 | null | transformers | 12,132 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tiv-fr
* source languages: tiv
* target languages: fr
* OPUS readme: [tiv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.fr | 22.3 | 0.389 |
|
Helsinki-NLP/opus-mt-tr-sv | dd61e4019527973eaf89dea2303366eab6eaeea8 | 2021-09-11T10:49:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tr",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tr-sv | 9 | null | transformers | 12,133 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tr-sv
* source languages: tr
* target languages: sv
* OPUS readme: [tr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tr.sv | 26.3 | 0.478 |
|
Helsinki-NLP/opus-mt-ts-fi | 186ae66ee908eb549d6d2fc111d62209c4d0c992 | 2021-09-11T10:49:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ts",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ts-fi | 9 | null | transformers | 12,134 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ts-fi
* source languages: ts
* target languages: fi
* OPUS readme: [ts-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.fi | 27.7 | 0.509 |
|
Helsinki-NLP/opus-mt-tw-es | 2b92519627890a2dfa9f6288d9b8986e05270e21 | 2021-09-11T10:50:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tw",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tw-es | 9 | null | transformers | 12,135 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tw-es
* source languages: tw
* target languages: es
* OPUS readme: [tw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.es | 25.9 | 0.441 |
|
Helsinki-NLP/opus-mt-uk-de | 695f511c49ef134d2194c9f115546b6c273fb994 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-de | 9 | null | transformers | 12,136 | ---
language:
- uk
- de
tags:
- translation
license: apache-2.0
---
### ukr-deu
* source group: Ukrainian
* target group: German
* OPUS readme: [ukr-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-deu/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.deu | 48.2 | 0.661 |
### System Info:
- hf_name: ukr-deu
- source_languages: ukr
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'de']
- src_constituents: {'ukr'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: deu
- short_pair: uk-de
- chrF2_score: 0.6609999999999999
- bleu: 48.2
- brevity_penalty: 0.98
- ref_len: 62298.0
- src_name: Ukrainian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: de
- prefer_old: False
- long_pair: ukr-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-uk-he | ff01cbe0f11d2f009bf34236b9fe58d9f1c66091 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-he | 9 | null | transformers | 12,137 | ---
language:
- uk
- he
tags:
- translation
license: apache-2.0
---
### ukr-heb
* source group: Ukrainian
* target group: Hebrew
* OPUS readme: [ukr-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): heb
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.heb | 35.7 | 0.557 |
### System Info:
- hf_name: ukr-heb
- source_languages: ukr
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'he']
- src_constituents: {'ukr'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: heb
- short_pair: uk-he
- chrF2_score: 0.557
- bleu: 35.7
- brevity_penalty: 1.0
- ref_len: 4765.0
- src_name: Ukrainian
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: he
- prefer_old: False
- long_pair: ukr-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-uk-hu | 17f3e9461db05569f8160c3a1e14da7a3a81c84e | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"hu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-hu | 9 | null | transformers | 12,138 | ---
language:
- uk
- hu
tags:
- translation
license: apache-2.0
---
### ukr-hun
* source group: Ukrainian
* target group: Hungarian
* OPUS readme: [ukr-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): hun
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.hun | 41.4 | 0.649 |
### System Info:
- hf_name: ukr-hun
- source_languages: ukr
- target_languages: hun
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'hu']
- src_constituents: {'ukr'}
- tgt_constituents: {'hun'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: hun
- short_pair: uk-hu
- chrF2_score: 0.649
- bleu: 41.4
- brevity_penalty: 0.9740000000000001
- ref_len: 2433.0
- src_name: Ukrainian
- tgt_name: Hungarian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: hu
- prefer_old: False
- long_pair: ukr-hun
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-wal-en | 2f06019086b97ed0d8768b978ab1d61adac0fc4d | 2021-09-11T10:51:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"wal",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-wal-en | 9 | null | transformers | 12,139 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-wal-en
* source languages: wal
* target languages: en
* OPUS readme: [wal-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wal-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wal.en | 22.5 | 0.386 |
|
Helsinki-NLP/opus-mt-yap-fr | 0a8a2f122d3d0263db62011b8394dbe45d3eb734 | 2021-09-11T10:52:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"yap",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-yap-fr | 9 | null | transformers | 12,140 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yap-fr
* source languages: yap
* target languages: fr
* OPUS readme: [yap-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.fr | 22.2 | 0.381 |
|
Herais/pred_timeperiod | fb5548d8db24c2d413c85bed919f533f6bddcfc0 | 2022-02-27T05:52:58.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:Custom",
"transformers",
"classification",
"license:apache-2.0"
]
| text-classification | false | Herais | null | Herais/pred_timeperiod | 9 | null | transformers | 12,141 | ---
language:
- zh
tags:
- classification
license: apache-2.0
datasets:
- Custom
metrics:
- rouge
---
This model predicts the time period of a story given a synopsis of about 200 Chinese characters.
The model was trained on TV and movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

checkpoint = "Herais/pred_timeperiod"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
# problem_type is a model-config option, so it belongs on the model, not the tokenizer.
model = BertForSequenceClassification.from_pretrained(
    checkpoint, problem_type="single_label_classification"
).to(device)

label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}
id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}

synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """

# Move the encoded inputs to the same device as the model.
inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)

model.eval()
with torch.no_grad():
    outputs = model(**inputs)

# Argmax over the logits gives class ids; map them back to label strings.
label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy()
labels_pred = [id2label_timeperiod[label_id] for label_id in label_ids_pred]

print(labels_pred)
# ['当代']
```
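The final decoding step above (argmax over the logits, then an id-to-label lookup) can be sketched without loading the model, using plain Python lists. The logits values below are fabricated purely for illustration:

```python
id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}

# Fabricated logits for two synopses: one row of 5 scores per input, one score per class.
logits = [
    [0.1, 2.3, 0.4, -1.0, 0.0],
    [1.9, 0.2, -0.5, 0.3, 0.1],
]

# Index of the maximum score in each row is the predicted class id.
label_ids_pred = [max(range(len(row)), key=row.__getitem__) for row in logits]
labels_pred = [id2label_timeperiod[i] for i in label_ids_pred]
print(labels_pred)  # ['当代', '古代']
```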
Citation
{} |
HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.2 | 77568c39b315b946014dd3ecca27edad644b01f8 | 2021-11-16T20:44:17.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | HungChau | null | HungChau/distilbert-base-cased-concept-extraction-wikipedia-v1.2 | 9 | null | transformers | 12,142 | Entry not found |
Intel/bert-base-uncased-sparse-85-unstructured-pruneofa | 2623863d5568c78583ff87da978a8768ff1525e9 | 2022-01-13T12:12:27.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"transformers",
"fill-mask"
]
| fill-mask | false | Intel | null | Intel/bert-base-uncased-sparse-85-unstructured-pruneofa | 9 | null | transformers | 12,143 | ---
language: en
tags:
- fill-mask
datasets:
- wikipedia
- bookcorpus
---
# 85% Sparse BERT-Base (uncased) Prune OFA
This model is a result from our paper [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754) presented in ENLSP NeurIPS Workshop 2021.
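For intuition, 85% unstructured sparsity means that 85% of the individual weights are zero, with no constraint on where the zeros fall. A toy sketch of how such a sparsity level is measured (illustrative only, not from the paper's codebase):

```python
def sparsity(weights, threshold=1e-8):
    """Fraction of entries whose magnitude is at most `threshold`."""
    flat = [w for row in weights for w in row]
    zeros = sum(1 for w in flat if abs(w) <= threshold)
    return zeros / len(flat)

# A toy 2x4 matrix with 5 of its 8 entries pruned to zero
w = [[0.0, 0.3, 0.0, 0.0],
     [0.1, 0.0, 0.0, -0.2]]
print(sparsity(w))  # 0.625
```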
For further details on the model and its results, see our paper and our implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all). |
Irina/Fairytale | e770e5b53d0250a7b45e1eb6373efe653c4226b7 | 2021-12-22T22:21:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Irina | null | Irina/Fairytale | 9 | null | transformers | 12,144 | Entry not found |
ItcastAI/bert_cn_finetuning | d7643297cce2a1e518c40480ff1fbf306a758c65 | 2021-05-18T21:10:29.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ItcastAI | null | ItcastAI/bert_cn_finetuning | 9 | null | transformers | 12,145 | Entry not found |
Jodsa/camembert_clf | 98baf3da92e362047658c32e1892ccac953ca7c7 | 2021-05-18T14:29:37.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
]
| text-classification | false | Jodsa | null | Jodsa/camembert_clf | 9 | null | transformers | 12,146 | Entry not found |
JorisCos/ConvTasNet_Libri3Mix_sepclean_16k | 4d2ee438cee8cc31708770028ab2332287da4f01 | 2021-09-23T15:49:03.000Z | [
"pytorch",
"dataset:Libri3Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
]
| audio-to-audio | false | JorisCos | null | JorisCos/ConvTasNet_Libri3Mix_sepclean_16k | 9 | null | asteroid | 12,147 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri3Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yaml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On the Libri3Mix min test set:
```yaml
si_sdr: 8.932601610824145
si_sdr_imp: 12.299341066588594
sdr: 9.557260814240447
sdr_imp: 12.76957128385349
sir: 17.387646884037455
sir_imp: 20.599955591768484
sar: 10.686885056960504
sar_imp: -55.8894643263213
stoi: 0.8481258332025354
stoi_imp: 0.25528367853750356
```
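The `si_sdr` figures above are scale-invariant signal-to-distortion ratios in dB, and `si_sdr_imp` is the improvement over the unprocessed mixture. As a rough, pure-Python sketch of the metric (Asteroid's own implementation is vectorized and batched; the signals here are illustrative):

```python
import math

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB for two equal-length 1-D signals."""
    dot = sum(e * r for e, r in zip(estimate, reference))
    ref_energy = sum(r * r for r in reference)
    alpha = dot / ref_energy                      # optimal scaling of the reference
    target = [alpha * r for r in reference]
    noise = [e - t for e, t in zip(estimate, target)]
    return 10 * math.log10(sum(t * t for t in target) / sum(n * n for n in noise))

print(round(si_sdr([1.0, 2.0, 3.1], [1.0, 2.0, 3.0]), 2))  # 36.12
```

Because of the optimal-scaling step, the metric is unchanged when the estimate is multiplied by a constant, which is what makes it "scale-invariant".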
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. |
JorisCos/DCUNet_Libri1Mix_enhsingle_16k | 3fa701427576e01e835ae415c8ed7516874b08dd | 2021-09-23T15:49:15.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DCUNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
]
| audio-to-audio | false | JorisCos | null | JorisCos/DCUNet_Libri1Mix_enhsingle_16k | 9 | 1 | asteroid | 12,148 | ---
tags:
- asteroid
- audio
- DCUNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DCUNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_n_filters: 1024
stft_kernel_size: 1024
stft_stride: 256
masknet:
architecture: Large-DCUNet-20
fix_length_mode: pad
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
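Given the STFT filterbank above (1024-sample window, hop 256), the number of frames per training segment can be estimated as below. This mirrors the usual centered-STFT frame count (as in `torch.stft` with `center=True`) and is an illustration, not Asteroid's code:

```python
def stft_frames(n_samples, win=1024, hop=256, center=True):
    # A centered STFT pads win // 2 samples on each side before framing.
    if center:
        n_samples += 2 * (win // 2)
    return 1 + (n_samples - win) // hop

# 3-second segments at 16 kHz, as in the config above
print(stft_frames(3 * 16000))  # 188
```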
Results:
On the Libri1Mix min test set:
```yml
si_sdr: 13.154035391645971
si_sdr_imp: 9.704254085786271
sdr: 13.568058873121435
sdr_imp: 10.065396073908367
sar: 13.568058873121435
sar_imp: 10.065396073908367
stoi: 0.9199373340235417
stoi_imp: 0.12401751048300132
```
License notice:
This work "DCUNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCUNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
JuliusAlphonso/dear-jarvis-monolith-xed-en | 6ff080bdc7929253477dc4d57b70faf21b88ab27 | 2021-06-22T09:48:03.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | JuliusAlphonso | null | JuliusAlphonso/dear-jarvis-monolith-xed-en | 9 | null | transformers | 12,149 | ## Model description
This model was trained on the XED dataset and achieved:

- validation loss: 0.5995
- validation accuracy (ROC-AUC): 84.28%
Labels are based on Plutchik's model of emotions and may be combined:

### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.8.0
- Tokenizers 0.10.3
|
Khanh/xlm-roberta-base-finetuned-viquad | 1df64444706632a660260c897e549a49f17a2416 | 2022-01-04T18:56:38.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Khanh | null | Khanh/xlm-roberta-base-finetuned-viquad | 9 | null | transformers | 12,150 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-viquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-viquad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 259 | 2.9945 |
| 3.3665 | 2.0 | 518 | 2.3761 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
KoichiYasuoka/roberta-base-japanese-aozora-char | 2f53454b2602c83b8418967cd3b6a7adc78267d4 | 2022-06-21T05:50:52.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-japanese-aozora-char | 9 | 1 | transformers | 12,151 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# roberta-base-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-base-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-aozora-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora-char")
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
|
KoichiYasuoka/roberta-large-japanese-char-luw-upos | 7b8a887f17db3f2c74706615fcff795fb6b76fbf | 2022-06-26T23:00:37.000Z | [
"pytorch",
"roberta",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-large-japanese-char-luw-upos | 9 | null | transformers | 12,152 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# roberta-large-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final | a095a27f0eb385bd9a4e0637cffdaf8bff85efd3 | 2022-02-08T04:27:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"transformers",
"Openslr Multilingual",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | LegolasTheElf | null | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final | 9 | null | transformers | 12,153 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- Openslr Multilingual
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: Wav2Vec2_xls_r_300m_hi_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) dataset and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
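The WER and CER above are word- and character-level edit-distance rates. A small reference implementation of WER (CER is the same computation over characters); this is a sketch, not the evaluation script used for this model:

```python
def wer(ref, hyp):
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = ref.split(), hyp.split()
    # Standard dynamic-programming (Levenshtein) table over words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[len(r)][len(h)] / len(r)

print(round(wer("the cat sat", "the cat sat down"), 4))  # 0.3333
```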
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Lumos/ag_news1 | cf2fb8c2ebc3c2c6e74049995ec475df6300c74d | 2021-12-13T12:01:36.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Lumos | null | Lumos/ag_news1 | 9 | null | transformers | 12,154 | Entry not found |
M-FAC/bert-mini-finetuned-stsb | cd5b9155a80f634ffbcf5f801f8abce6df9634c8 | 2021-12-13T08:17:27.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
]
| text-classification | false | M-FAC | null | M-FAC/bert-mini-finetuned-stsb | 9 | null | transformers | 12,155 | # BERT-mini model finetuned with M-FAC
This model is finetuned on the STS-B dataset with the state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on STS-B validation set:
```bash
pearson = 85.03
spearman = 85.06
```
Mean and standard deviation for 5 runs on STS-B validation set:
| | Pearson | Spearman |
|:----:|:-----------:|:----------:|
| Adam | 82.09 ± 0.54 | 82.64 ± 0.71 |
| M-FAC | 84.66 ± 0.30 | 84.65 ± 0.30 |
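STS-B is scored with the Pearson and Spearman correlations between predicted and gold similarity scores. For reference, a minimal Pearson implementation (the actual evaluation relies on the GLUE metric scripts):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```

Spearman correlation is simply the Pearson correlation computed over the ranks of the values rather than the values themselves.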
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 7 \
--model_name_or_path prajjwal1/bert-mini \
--task_name stsb \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
MMG/bert-base-spanish-wwm-cased-finetuned-sqac | 15da909d0ea9e7859994f65e3adf1a8047ecd0e6 | 2021-12-01T06:13:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"es",
"dataset:sqac",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | MMG | null | MMG/bert-base-spanish-wwm-cased-finetuned-sqac | 9 | null | transformers | 12,156 | ---
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-sqac
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-sqac
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the sqac dataset.
It achieves the following results on the evaluation set:
- Exact match: 62.017167
- F1: 79.452767
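`exact_match` counts answers that match the gold span verbatim, while `f1` is the token-overlap F1 used in SQuAD-style evaluation. A sketch of the token-level F1 (the official SQuAD script additionally normalizes casing and punctuation, which is omitted here):

```python
def f1_score(prediction, truth):
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred, gold = prediction.split(), truth.split()
    gold_counts = {}
    for t in gold:
        gold_counts[t] = gold_counts.get(t, 0) + 1
    common = 0
    for t in pred:
        if gold_counts.get(t, 0) > 0:
            common += 1
            gold_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(gold)
    return 2 * precision * recall / (precision + recall)

print(round(f1_score("el rey de España", "rey de España"), 4))  # 0.8571
```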
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1335 | 1.0 | 1230 | 0.9346 |
| 0.6794 | 2.0 | 2460 | 0.8634 |
| 0.3992 | 3.0 | 3690 | 0.9662 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387 | 01077a07652e1e11395261c4a63a7f145e0a5fd5 | 2022-01-21T07:05:45.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:MadhurJindalWorkMail/autonlp-data-Gibb-Detect",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | MadhurJindalWorkMail | null | MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387 | 9 | null | transformers | 12,157 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- MadhurJindalWorkMail/autonlp-data-Gibb-Detect
co2_eq_emissions: 70.95647633212745
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 515314387
- CO2 Emissions (in grams): 70.95647633212745
## Validation Metrics
- Loss: 0.08077705651521683
- Accuracy: 0.9760103738923709
- Macro F1: 0.9728412857204902
- Micro F1: 0.9760103738923709
- Weighted F1: 0.9759907151741426
- Macro Precision: 0.9736622407675567
- Micro Precision: 0.9760103738923709
- Weighted Precision: 0.97673611876005
- Macro Recall: 0.9728978421381711
- Micro Recall: 0.9760103738923709
- Weighted Recall: 0.9760103738923709
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Maha/OGBV-gender-bert-hi-en-hasoc20a-fin | 94fe6be6729b2e7bafc737410636c586a940b13c | 2022-02-23T03:56:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Maha | null | Maha/OGBV-gender-bert-hi-en-hasoc20a-fin | 9 | null | transformers | 12,158 | Entry not found |
MaryaAI/opus-mt-en-ro-finetuned-en-to-ro | e88a9e255c0ef5e6647f056d0050bda63a99aeac | 2021-09-05T08:42:06.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | MaryaAI | null | MaryaAI/opus-mt-en-ro-finetuned-en-to-ro | 9 | null | transformers | 12,159 | ---
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1599
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1599
- Gen Len: 34.1236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1599 | 34.1236 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Media1129/keyword-tag-model-6000-9-16_more_ingredient | f3209f2010ee9fa4d14c2295a47ba336301b6d7c | 2021-09-17T02:19:45.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-6000-9-16_more_ingredient | 9 | null | transformers | 12,160 | Entry not found |
Mihneo/romanian_bert_news | cbfd7241c69072388bc91d5f508c1bfd78613758 | 2021-05-18T20:33:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Mihneo | null | Mihneo/romanian_bert_news | 9 | null | transformers | 12,161 | |
Milos/slovak-gpt-j-405M | ce46daa785ff8ca71f8e9f9c6913c1fceb9f98a6 | 2022-02-18T13:46:50.000Z | [
"pytorch",
"gptj",
"text-generation",
"sk",
"arxiv:2104.09864",
"transformers",
"Slovak GPT-J",
"causal-lm",
"license:gpl-3.0"
]
| text-generation | false | Milos | null | Milos/slovak-gpt-j-405M | 9 | null | transformers | 12,162 | ---
language:
- sk
tags:
- Slovak GPT-J
- pytorch
- causal-lm
license: gpl-3.0
---
# Slovak GPT-J-405M
Slovak GPT-J-405M is the second model released in the Slovak GPT-J series, after its smaller variant [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M). Since then, a larger [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B) has been released.
## Model Description
Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 405M trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 405,677,136 |
| \\(n_{layers}\\) | 24 |
| \\(d_{model}\\) | 1024 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3†) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<p><strong>†</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure>
## Training data
Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model.
The dataset was preprocessed and cleaned in a specific way that involves a few minor caveats, so in order to achieve the expected performance, refer to the "How to use" section below. Please keep in mind that, despite the effort to remove inappropriate content from the corpus, the model might still generate sensitive content or leak sensitive information.
## Training procedure
This model was trained for a bit more than 36.5 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.821`.
## Intended Use
Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt.
### How to use
This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-405M")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-405M")
```
When generating a prompt keep in mind these three things, and you should be good to go:
1. Never leave trailing whitespaces. There's a difference between how the tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e. `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`.
2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`.
3. In case of a new line always enter `\n\n` instead of a single `\n`
To illustrate an example of a basic text generation:
```
>>> prompt = "Tradičné jedlo na Orave sú"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input)
>>> tokenizer.decode(output[0])
'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov'
```
### Capabilities, Limitations, and Biases
The capability of this particular model is somewhat decent despite its small size totalling 405M parameters. With relative ease it can manage to generate interesting and grammatically correct content.
For example, you can try a few of the following prompts. (For the sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines.)
Try generating "How to" articles:
```
PROMPT
Ako napísať pôsobivú esej krok za krokom
OUTPUT
Ako napísať pôsobivú esej krok za krokom
V tomto článku sa dozviete, ako napísať esej, ktorá bude mať v sebe niečo, čo zaujme a bude sa vám páčiť.
V prvom rade si musíte uvedomiť, že esej je vlastne písaný text, ktorý má byť napísaný na počítači.'
```
However, relying on the model to produce factually correct information isn't recommended.
Or try to generate chat conversations:
```
PROMPT
Peter: Ako sa mas?
Lucia: Ale celkom dobre. Co si robil?
Peter:
OUTPUT
Peter: Ako sa mas?
Lucia: Ale celkom dobre. Co si robil?
Peter: No, bol som na chate.
Lucia: A co si tam robil?
Peter: No, bol som tam s kamošmi.
Lucia: A co si tam robil?
Peter: No, bol som tam s kamošmi.
```
Apparently, either Peter is unusually prone to repeating himself in this particular context, or there's a problem with the model. Let's assume the second explanation holds some merit. In general, GPT models can (and often will) get into a repeating cycle of generating the same content. This is a common problem beyond the scope of this README; however, see [generate's documentation](https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate) on how to introduce a frequency/repetition penalty.
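As an illustration of what such a penalty does, the following sketch mirrors the CTRL-style repetition penalty used by `transformers`' `generate`, written over plain lists rather than tensors (the function name here is illustrative):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    # Tokens that were already generated get their logit pushed down:
    # positive logits are divided by the penalty, negative ones multiplied.
    out = list(logits)
    for token_id in set(generated_ids):
        if out[token_id] > 0:
            out[token_id] /= penalty
        else:
            out[token_id] *= penalty
    return out

print(apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1], penalty=2.0))
# [1.0, -2.0, 0.5]
```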
Since the dataset contains profanity, politically incorrect language, and (unintentionally) even bits of text in Czech, the model can generate them to some extent too. Here's an example of the model output when the prompt is in Czech:
```
>>> prompt = "Věta nesmí být sprostá a musí být zcela"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input, max_length=16)
>>> tokenizer.decode(output[0])
'Věta nesmí být sprostá a musí být zcela pravdivá.'
```
## Citation and Related Information
This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :)
If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile.
### BibTeX entry
To cite this model:
```bibtex
@misc{slovak-gpt-j-405m,
author = {Kondela, Milos},
title = {{Slovak GPT-J-405M}},
howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-405M}},
year = 2022,
month = February
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/). |
Monsia/camembert-fr-covid-tweet-classification | b86ce8a0c7dba1e95caf20af4db692cb3d499fab | 2021-10-29T15:17:47.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"classification",
"license:apache-2.0"
]
| text-classification | false | Monsia | null | Monsia/camembert-fr-covid-tweet-classification | 9 | null | transformers | 12,163 | ---
language:
- fr
tags:
- classification
license: apache-2.0
metrics:
- accuracy
widget:
- text: "tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les 'ont dit'..."
---
# camembert-fr-covid-tweet-classification
This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), trained for French COVID-19 tweet topic classification.
This model reaches an accuracy of 66.00% on the dev set.
In this dataset, given a tweet, the goal was to infer its underlying topic by choosing from five topic classes:
- chiffres: the tweet talks about COVID-19 statistics.
- mesures: the tweet talks about measures taken by the government against COVID-19.
- opinions: the tweet talks about people's opinions, such as fake news.
- symptomes: the tweet talks about symptoms or variants of COVID-19.
- divers: other.
# Pipelining the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
nlp_topic_classif = pipeline('text-classification', model=model, tokenizer=tokenizer)
nlp_topic_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': 'opinions', 'score': 0.831}]
```
|
Mood/distilbert-base-uncased-finetuned-ner | aff9865337ff315110f791b7703b54348b160ffe | 2021-11-18T16:56:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Mood | null | Mood/distilbert-base-uncased-finetuned-ner | 9 | null | transformers | 12,164 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Muennighoff/SGPT-2.7B-weightedmean-nli-bitfit | 3f56086f795e8562fe8cb97178f23ed6fa453edb | 2022-06-18T13:11:04.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
]
| sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-2.7B-weightedmean-nli-bitfit | 9 | null | sentence-transformers | 12,165 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-2.7B-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 70456 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 7045,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 7046,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2560, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
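The `pooling_mode_weightedmean_tokens` flag above means sentence embeddings are a position-weighted mean of the token embeddings (later tokens weigh more) rather than a plain average. Below is a minimal NumPy sketch of that pooling step; it assumes weights grow linearly with 1-based token position, as in the weighted-mean scheme described in the SGPT paper, and is an illustration rather than the exact library implementation.

```python
import numpy as np

def weighted_mean_pool(token_embeddings: np.ndarray) -> np.ndarray:
    """Position-weighted mean over the sequence axis.

    token_embeddings: (seq_len, hidden_dim) array of token vectors.
    The weight of token i is proportional to its 1-based position,
    so later tokens contribute more to the sentence embedding.
    """
    seq_len = token_embeddings.shape[0]
    weights = np.arange(1, seq_len + 1, dtype=np.float64)  # 1, 2, ..., n
    weights /= weights.sum()                               # normalize to sum to 1
    return weights @ token_embeddings                      # shape (hidden_dim,)

# Tiny example: 3 tokens with 2-dimensional embeddings
toks = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
emb = weighted_mean_pool(toks)
```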
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Mustang/BERT_responsible_AI | 9d96a6f94a07cbcbd695385a9a6a317a7128ba25 | 2022-01-26T13:44:12.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:eupl-1.1"
]
| text-classification | false | Mustang | null | Mustang/BERT_responsible_AI | 9 | null | transformers | 12,166 | ---
license: eupl-1.1
---
## BERT model of the Explainable AI project |
NDugar/2epochv3mlni | f77c36f81a093d658f3b30b8f5f7b5a4fefd1fdf | 2021-11-30T18:31:47.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
]
| zero-shot-classification | false | NDugar | null | NDugar/2epochv3mlni | 9 | null | transformers | 12,167 | ---
language: en
tags:
- deberta-v3
- deberta-v2`
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size. The total parameters are 1.5B and it is trained with 160GB raw data.
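This checkpoint is exposed through the zero-shot-classification pipeline tag. Under the standard NLI-based zero-shot recipe, each candidate label is turned into a hypothesis such as "This example is <label>." and the model's entailment score ranks the labels. A minimal NumPy sketch of the label-selection step only; the entailment logits here are made up for illustration, not produced by the model:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def pick_label(entailment_logits: dict) -> str:
    """Given one entailment logit per candidate label (from running the
    NLI model on premise + 'This example is <label>.'), normalize across
    labels and return the highest-scoring one."""
    labels = list(entailment_logits)
    probs = softmax(np.array([entailment_logits[l] for l in labels]))
    return labels[int(np.argmax(probs))]

# Hypothetical logits, for illustration only
scores = {"politics": 0.2, "sports": 2.1, "finance": -0.5}
best = pick_label(scores)
```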
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC and STS-B, we fine-tune the tasks starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli) and [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuAD v2.0 also improve slightly when starting from MNLI fine-tuned models; however, we only report numbers fine-tuned from the pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed**, as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
```latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` |
Narshion/bert-base-multilingual-cased-urgency | 3ec2e67d7ab2503dcc14e5ccfc8fa4db42df070b | 2021-09-15T12:27:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Narshion | null | Narshion/bert-base-multilingual-cased-urgency | 9 | null | transformers | 12,168 | ---
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: bert-base-multilingual-cased-urgency
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-urgency
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the mWACH NEO dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2797
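For masked-language-modeling checkpoints like this one, perplexity is conventionally reported as the exponential of the evaluation loss. As a rough check on the loss above (assuming it is a mean cross-entropy in nats; the card itself does not report a perplexity figure):

```python
import math

eval_loss = 2.2797                 # final validation loss from the table above
perplexity = math.exp(eval_loss)   # exp of mean cross-entropy
print(f"perplexity ≈ {perplexity:.2f}")
```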
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.1408 | 1.0 | 5659 | 3.6705 |
| 2.8777 | 2.0 | 11318 | 2.5536 |
| 2.561 | 3.0 | 16977 | 2.2740 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Nihwy/DialoSqui | 4d0dc1b82842c78d6e4301daaffae5046ea9d9f9 | 2022-01-23T19:46:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Nihwy | null | Nihwy/DialoSqui | 9 | null | transformers | 12,169 | ---
tags:
- conversational
---
# Squi |
Norod78/hebrew-gpt_neo-xl-poetry | 09a87f6351a2cf63c86e0c19ac2ea63387e15482 | 2022-07-04T07:26:28.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"he",
"transformers",
"license:mit"
]
| text-generation | false | Norod78 | null | Norod78/hebrew-gpt_neo-xl-poetry | 9 | 1 | transformers | 12,170 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "תריסר מכשפות סג"
- text: "\n\nהאיש האחרון בעולם /"
- text: "פעם אחת, לפני שנים רבות"
- text: "הרמיוני הסתירה את"
- text: "לפתע, אור ירוק"
license: mit
---
# hebrew-gpt_neo-xl-poetry
Hebrew poetry text-generation model, fine-tuned from [hebrew-gpt_neo-xl](https://huggingface.co/Norod78/hebrew-gpt_neo-xl).
## Datasets
An assortment of various Hebrew books, magazines and poetry corpora
## Training Config
Similar to [this one](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR>
## Usage
### Google Colab Notebook
Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR>
#### Simple usage sample code
```python
!pip install tokenizers==0.10.3 transformers==4.8.0
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry", pad_token_id=tokenizer.eos_token_id)
prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count() if torch.cuda.is_available() else 0
print(f"device: {device}, n_gpu: {n_gpu}")
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
model.to(device)
encoded_prompt = tokenizer.encode(
prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
print("input_ids = " + str(input_ids))
if input_ids is not None:
max_len += len(encoded_prompt[0])
if max_len > 2048:
max_len = 2048
print("Updated max_len = " + str(max_len))
stop_token = "<|endoftext|>"
new_lines = "\n\n\n"
sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.95,
num_return_sequences=sample_output_num
)
print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
text = tokenizer.decode(sample_output, skip_special_tokens=True)
# Remove all text after the stop token
text = text[: text.find(stop_token) if stop_token else None]
# Remove all text after 3 newlines
text = text[: text.find(new_lines) if new_lines else None]
print("\n{}: {}".format(i, text))
print("\n" + 100 * '-')
```
|
Osiris/emotion_classifier | 531104b7bfc271dc1a17d92ec7b9214b0984776f | 2021-11-26T07:57:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Osiris | null | Osiris/emotion_classifier | 9 | 1 | transformers | 12,171 | ### Introduction:
This is a text-classification model: it determines the emotion behind a sentence.
### Label Explanation:
LABEL_0: Positive (positive emotion)
LABEL_1: Negative (negative emotion)
### Usage:
```python
>>> from transformers import pipeline
>>> ec = pipeline('text-classification', model='Osiris/emotion_classifier')
>>> ec("Hello, I'm a good model.")
```
### Accuracy:
We reach 83.82% accuracy on the validation set and 84.42% on the test set. |
RASMUS/wav2vec2-xlsr-1b-ru | 8f9d93cf7228d7e0390b9d9917fdedb277faef2e | 2022-03-23T18:29:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ru",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"audio",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"speech",
"model-index"
]
| automatic-speech-recognition | false | RASMUS | null | RASMUS/wav2vec2-xlsr-1b-ru | 9 | null | transformers | 12,172 | ---
language: ru
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- speech
model-index:
- name: XLS-R 1B Wav2Vec2 Russian by Rasmus Toivanen
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ru
metrics:
- name: Test WER
type: wer
value: 10.83
- name: Test CER
type: cer
value: 2.41
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ru
metrics:
- name: Test WER
type: wer
value: 37.71
- name: Test CER
type: cer
value: 12.98
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ru
metrics:
- name: Test WER
type: wer
value: 31.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-ru
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- Wer: 0.0971
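The WER reported above is the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, divided by the number of reference words. A minimal pure-Python sketch of the metric, using a made-up Russian example:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via classic dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                       # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                       # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[-1][-1] / len(ref)

# Hypothesis misses one of three reference words -> WER = 1/3
score = wer("привет как дела", "привет дела")
```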
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5462 | 0.35 | 500 | 0.4027 | 0.3575 |
| 0.498 | 0.69 | 1000 | 0.2588 | 0.2513 |
| 0.4279 | 1.04 | 1500 | 0.2265 | 0.2204 |
| 0.4099 | 1.38 | 2000 | 0.2189 | 0.1979 |
| 0.4688 | 1.73 | 2500 | 0.2100 | 0.1920 |
| 0.2241 | 2.07 | 3000 | 0.1980 | 0.1767 |
| 0.2056 | 2.42 | 3500 | 0.2020 | 0.1683 |
| 0.3423 | 2.76 | 4000 | 0.1862 | 0.1606 |
| 0.2478 | 3.11 | 4500 | 0.1787 | 0.1563 |
| 0.3079 | 3.45 | 5000 | 0.1759 | 0.1555 |
| 0.2477 | 3.8 | 5500 | 0.1713 | 0.1423 |
| 0.1718 | 4.14 | 6000 | 0.1695 | 0.1391 |
| 0.1675 | 4.49 | 6500 | 0.1677 | 0.1372 |
| 0.1631 | 4.83 | 7000 | 0.1652 | 0.1333 |
| 0.1429 | 5.18 | 7500 | 0.1605 | 0.1308 |
| 0.1505 | 5.52 | 8000 | 0.1612 | 0.1245 |
| 0.1385 | 5.87 | 8500 | 0.1487 | 0.1225 |
| 0.1285 | 6.22 | 9000 | 0.1526 | 0.1201 |
| 0.1153 | 6.56 | 9500 | 0.1464 | 0.1172 |
| 0.1159 | 6.91 | 10000 | 0.1505 | 0.1143 |
| 0.1061 | 7.25 | 10500 | 0.1444 | 0.1106 |
| 0.1016 | 7.6 | 11000 | 0.1427 | 0.1075 |
| 0.1125 | 7.94 | 11500 | 0.1386 | 0.1045 |
| 0.0937 | 8.29 | 12000 | 0.1403 | 0.1022 |
| 0.1059 | 8.63 | 12500 | 0.1406 | 0.1022 |
| 0.0857 | 8.98 | 13000 | 0.1372 | 0.0992 |
| 0.0901 | 9.32 | 13500 | 0.1380 | 0.0977 |
| 0.0913 | 9.67 | 14000 | 0.1352 | 0.0971 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
SEBIS/code_trans_t5_small_code_comment_generation_java | d91a502235dc170da9074967ef5a0d8101cf898b | 2021-06-23T09:55:51.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_comment_generation_java | 9 | null | transformers | 12,173 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code comment generation java
Pretrained model on the Java programming language using the t5-small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Java code functions: it works best with tokenized Java functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on Code Comment Generation dataset.
## Intended uses & limitations
The model could be used to generate descriptions for Java functions or be fine-tuned on other Java code tasks. It can be used on unparsed and untokenized Java code. However, if the Java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/code%20comment%20generation/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEISHIN/distilbert-base-uncased-finetuned-ner | 15453a7ed482a01db1d1437c58381cb8c67e44b5 | 2021-12-27T07:53:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | SEISHIN | null | SEISHIN/distilbert-base-uncased-finetuned-ner | 9 | null | transformers | 12,174 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9289272666888077
- name: Recall
type: recall
value: 0.9386956035350711
- name: F1
type: f1
value: 0.933785889160917
- name: Accuracy
type: accuracy
value: 0.9842565968195466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0605
- Precision: 0.9289
- Recall: 0.9387
- F1: 0.9338
- Accuracy: 0.9843
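As a quick consistency check, the F1 above is the harmonic mean of the reported precision and recall:

```python
# Precision and recall reported in the evaluation results above
precision = 0.9289272666888077
recall = 0.9386956035350711

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # should agree with the reported 0.9338
```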
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2388 | 1.0 | 878 | 0.0671 | 0.9162 | 0.9211 | 0.9187 | 0.9813 |
| 0.0504 | 2.0 | 1756 | 0.0602 | 0.9225 | 0.9366 | 0.9295 | 0.9834 |
| 0.0299 | 3.0 | 2634 | 0.0605 | 0.9289 | 0.9387 | 0.9338 | 0.9843 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Sancha/t5-small-finetuned-fi-to-en | 401c47619f0f85e09b060e4db47db1bc5532e981 | 2021-12-05T23:36:44.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt19",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Sancha | null | Sancha/t5-small-finetuned-fi-to-en | 9 | null | transformers | 12,175 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt19
metrics:
- bleu
model-index:
- name: t5-small-finetuned-fi-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt19
type: wmt19
args: fi-en
metrics:
- name: Bleu
type: bleu
value: 1.2541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-fi-to-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5185
- Bleu: 1.2541
- Gen Len: 17.395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.413 | 1.0 | 6250 | 3.5378 | 1.2291 | 17.4057 |
| 3.342 | 2.0 | 12500 | 3.5185 | 1.2541 | 17.395 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SauravMaheshkar/clr-pretrained-roberta-base | a289bbd8900a10bd7cf7b988e9f559c680997e6a | 2021-09-23T15:58:06.000Z | [
"pytorch",
"roberta",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
]
| fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-pretrained-roberta-base | 9 | null | transformers | 12,176 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
metrics:
- Perplexity
---

# PreTraining
| **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 |
| electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 |
| electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 |
| electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 |
| distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
|
SetFit/deberta-v3-large__sst2__train-16-9 | ae356250baca330080c2736285d3b417e651e0f0 | 2022-02-10T11:39:45.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-9 | 9 | null | transformers | 12,177 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-9
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2598
- Accuracy: 0.7809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6887 | 1.0 | 7 | 0.7452 | 0.2857 |
| 0.6889 | 2.0 | 14 | 0.7988 | 0.2857 |
| 0.6501 | 3.0 | 21 | 0.8987 | 0.2857 |
| 0.4286 | 4.0 | 28 | 0.9186 | 0.4286 |
| 0.3591 | 5.0 | 35 | 0.5566 | 0.7143 |
| 0.0339 | 6.0 | 42 | 1.1130 | 0.5714 |
| 0.013 | 7.0 | 49 | 1.8296 | 0.7143 |
| 0.0041 | 8.0 | 56 | 1.7069 | 0.7143 |
| 0.0023 | 9.0 | 63 | 1.1942 | 0.7143 |
| 0.0022 | 10.0 | 70 | 0.6054 | 0.7143 |
| 0.0011 | 11.0 | 77 | 0.3872 | 0.7143 |
| 0.0006 | 12.0 | 84 | 0.3217 | 0.7143 |
| 0.0005 | 13.0 | 91 | 0.2879 | 0.8571 |
| 0.0005 | 14.0 | 98 | 0.2640 | 0.8571 |
| 0.0004 | 15.0 | 105 | 0.2531 | 0.8571 |
| 0.0003 | 16.0 | 112 | 0.2384 | 0.8571 |
| 0.0004 | 17.0 | 119 | 0.2338 | 0.8571 |
| 0.0003 | 18.0 | 126 | 0.2314 | 0.8571 |
| 0.0003 | 19.0 | 133 | 0.2276 | 0.8571 |
| 0.0003 | 20.0 | 140 | 0.2172 | 0.8571 |
| 0.0003 | 21.0 | 147 | 0.2069 | 0.8571 |
| 0.0002 | 22.0 | 154 | 0.2018 | 0.8571 |
| 0.0002 | 23.0 | 161 | 0.2005 | 0.8571 |
| 0.0002 | 24.0 | 168 | 0.1985 | 0.8571 |
| 0.0002 | 25.0 | 175 | 0.1985 | 1.0 |
| 0.0002 | 26.0 | 182 | 0.1955 | 1.0 |
| 0.0002 | 27.0 | 189 | 0.1967 | 1.0 |
| 0.0002 | 28.0 | 196 | 0.1918 | 1.0 |
| 0.0002 | 29.0 | 203 | 0.1888 | 1.0 |
| 0.0002 | 30.0 | 210 | 0.1864 | 1.0 |
| 0.0002 | 31.0 | 217 | 0.1870 | 1.0 |
| 0.0002 | 32.0 | 224 | 0.1892 | 1.0 |
| 0.0002 | 33.0 | 231 | 0.1917 | 1.0 |
| 0.0002 | 34.0 | 238 | 0.1869 | 1.0 |
| 0.0002 | 35.0 | 245 | 0.1812 | 1.0 |
| 0.0001 | 36.0 | 252 | 0.1777 | 1.0 |
| 0.0002 | 37.0 | 259 | 0.1798 | 1.0 |
| 0.0002 | 38.0 | 266 | 0.1824 | 0.8571 |
| 0.0002 | 39.0 | 273 | 0.1846 | 0.8571 |
| 0.0002 | 40.0 | 280 | 0.1839 | 0.8571 |
| 0.0001 | 41.0 | 287 | 0.1826 | 0.8571 |
| 0.0001 | 42.0 | 294 | 0.1779 | 0.8571 |
| 0.0002 | 43.0 | 301 | 0.1762 | 0.8571 |
| 0.0001 | 44.0 | 308 | 0.1742 | 1.0 |
| 0.0002 | 45.0 | 315 | 0.1708 | 1.0 |
| 0.0001 | 46.0 | 322 | 0.1702 | 1.0 |
| 0.0001 | 47.0 | 329 | 0.1699 | 1.0 |
| 0.0001 | 48.0 | 336 | 0.1695 | 1.0 |
| 0.0001 | 49.0 | 343 | 0.1683 | 1.0 |
| 0.0001 | 50.0 | 350 | 0.1681 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-8 | 869f9dfb905868850384f675c71c137ff8a12f65 | 2022-02-10T09:59:57.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-8-8 | 9 | null | transformers | 12,178 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-8
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7414
- Accuracy: 0.5623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6597 | 1.0 | 3 | 0.7716 | 0.25 |
| 0.6376 | 2.0 | 6 | 0.7802 | 0.25 |
| 0.5857 | 3.0 | 9 | 0.6625 | 0.75 |
| 0.4024 | 4.0 | 12 | 0.5195 | 0.75 |
| 0.2635 | 5.0 | 15 | 0.4222 | 1.0 |
| 0.1714 | 6.0 | 18 | 0.4410 | 0.5 |
| 0.1267 | 7.0 | 21 | 0.7773 | 0.75 |
| 0.0582 | 8.0 | 24 | 0.9070 | 0.75 |
| 0.0374 | 9.0 | 27 | 0.9539 | 0.75 |
| 0.0204 | 10.0 | 30 | 1.0507 | 0.75 |
| 0.012 | 11.0 | 33 | 1.2802 | 0.5 |
| 0.0086 | 12.0 | 36 | 1.4272 | 0.5 |
| 0.0049 | 13.0 | 39 | 1.4803 | 0.5 |
| 0.0039 | 14.0 | 42 | 1.4912 | 0.5 |
| 0.0031 | 15.0 | 45 | 1.5231 | 0.5 |
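The validation loss above bottoms out at epoch 5 and then climbs while the training loss keeps falling, the usual overfitting signature on an 8-example training set. If per-epoch checkpointing were enabled, selecting the best epoch from such a log is a one-liner (losses copied from the table above):

```python
# (epoch, validation_loss) pairs copied from the table above
log = [
    (1, 0.7716), (2, 0.7802), (3, 0.6625), (4, 0.5195), (5, 0.4222),
    (6, 0.4410), (7, 0.7773), (8, 0.9070), (9, 0.9539), (10, 1.0507),
    (11, 1.2802), (12, 1.4272), (13, 1.4803), (14, 1.4912), (15, 1.5231),
]

# Pick the epoch whose checkpoint minimizes validation loss
best_epoch, best_loss = min(log, key=lambda pair: pair[1])
```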
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-0 | 3ae2b7c08157608f27f822711e6d90beafc5d6a0 | 2022-02-09T20:17:24.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__subj__train-8-0 | 9 | null | transformers | 12,179 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4440
- Accuracy: 0.789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6868 | 0.5 |
| 0.6683 | 2.0 | 6 | 0.6804 | 0.75 |
| 0.6375 | 3.0 | 9 | 0.6702 | 0.75 |
| 0.5997 | 4.0 | 12 | 0.6686 | 0.75 |
| 0.5345 | 5.0 | 15 | 0.6720 | 0.75 |
| 0.4673 | 6.0 | 18 | 0.6646 | 0.75 |
| 0.4214 | 7.0 | 21 | 0.6494 | 0.75 |
| 0.3439 | 8.0 | 24 | 0.6313 | 0.75 |
| 0.3157 | 9.0 | 27 | 0.6052 | 0.75 |
| 0.2329 | 10.0 | 30 | 0.5908 | 0.75 |
| 0.1989 | 11.0 | 33 | 0.5768 | 0.75 |
| 0.1581 | 12.0 | 36 | 0.5727 | 0.75 |
| 0.1257 | 13.0 | 39 | 0.5678 | 0.75 |
| 0.1005 | 14.0 | 42 | 0.5518 | 0.75 |
| 0.0836 | 15.0 | 45 | 0.5411 | 0.75 |
| 0.0611 | 16.0 | 48 | 0.5320 | 0.75 |
| 0.0503 | 17.0 | 51 | 0.5299 | 0.75 |
| 0.0407 | 18.0 | 54 | 0.5368 | 0.75 |
| 0.0332 | 19.0 | 57 | 0.5455 | 0.75 |
| 0.0293 | 20.0 | 60 | 0.5525 | 0.75 |
| 0.0254 | 21.0 | 63 | 0.5560 | 0.75 |
| 0.0231 | 22.0 | 66 | 0.5569 | 0.75 |
| 0.0201 | 23.0 | 69 | 0.5572 | 0.75 |
| 0.0179 | 24.0 | 72 | 0.5575 | 0.75 |
| 0.0184 | 25.0 | 75 | 0.5547 | 0.75 |
| 0.0148 | 26.0 | 78 | 0.5493 | 0.75 |
| 0.0149 | 27.0 | 81 | 0.5473 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Shushant/distilgpt2-finetuned-nepaligpt | 0709642326f37fb07f39b7a5c6c2e7b115d855d8 | 2022-01-18T11:14:02.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | Shushant | null | Shushant/distilgpt2-finetuned-nepaligpt | 9 | null | transformers | 12,180 | Entry not found |
SimonThormeyer/movie-plot-generator-longer-plots | 9eeef143cea1ac81462bee9dd3c15f604ba60c91 | 2021-07-27T15:06:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | SimonThormeyer | null | SimonThormeyer/movie-plot-generator-longer-plots | 9 | null | transformers | 12,181 | Entry not found |
SoLID/sgd-response-generator | 0727b39c17ee8dab0ee2444f1e88a30a782fd839 | 2021-12-15T06:18:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | SoLID | null | SoLID/sgd-response-generator | 9 | null | transformers | 12,182 | Entry not found |
Sonny/distilbert-base-uncased-finetuned-squad-d5716d28 | 4a359e825cdc7c5f7e98f2a9d72c879a3403e023 | 2022-02-16T00:49:43.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | Sonny | null | Sonny/distilbert-base-uncased-finetuned-squad-d5716d28 | 9 | null | transformers | 12,183 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
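A second distillation step of this kind trains the student to match the teacher's temperature-softened output distribution. A minimal pure-Python sketch of the Hinton-style loss (the temperature value and this scalar formulation are illustrative assumptions; the actual training code is not reproduced in this card):

```python
import math

def soften(logits, temperature):
    """Softmax over logits scaled down by the temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = soften(teacher_logits, temperature)  # teacher distribution is the target
    q = soften(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# A student that already matches the teacher incurs (numerically) zero loss
loss_when_matched = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```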
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
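For reference, the token-overlap F1 that the `squad` metric reports can be sketched in a few lines; this simplified version skips the official normalization of articles and punctuation, so treat it as an illustration rather than the exact scorer:

```python
import collections

def squad_token_f1(prediction, ground_truth):
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

score = squad_token_f1("the Broncos", "Denver Broncos")  # → 0.5
```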
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-BioNLP13 | 54ddb072a3319c66cfb00e2287d03f2e828e67d6 | 2022-02-23T01:33:52.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-BioNLP13 | 9 | null | transformers | 12,184 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-BioNLP13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-BioNLP13
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2217
- Precision: 0.7936
- Recall: 0.8067
- F1: 0.8001
- Accuracy: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4206 | 1.0 | 692 | 0.2182 | 0.7513 | 0.7757 | 0.7633 | 0.9342 |
| 0.1872 | 2.0 | 1384 | 0.2032 | 0.7779 | 0.7865 | 0.7821 | 0.9398 |
| 0.0982 | 3.0 | 2076 | 0.2043 | 0.7995 | 0.7904 | 0.7949 | 0.9443 |
| 0.0735 | 4.0 | 2768 | 0.2217 | 0.7936 | 0.8067 | 0.8001 | 0.9451 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
TTYU/DialoGPT-small-trump | 270b9a58376cd00e31efb4c0e6a187679f0bfcd7 | 2021-09-22T21:22:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | TTYU | null | TTYU/DialoGPT-small-trump | 9 | null | transformers | 12,185 | ---
tags:
- conversational
---
# Trump Tweets DialoGPT Model |
Tahsin-Mayeesha/bangla-fake-news-mbert | b9a6a1c334d68ccec965cb44e5bf62bf38dedad3 | 2021-08-05T14:06:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Tahsin-Mayeesha | null | Tahsin-Mayeesha/bangla-fake-news-mbert | 9 | null | transformers | 12,186 | Entry not found |
TransQuest/monotransquest-hter-en_de-it-nmt | 22b6caca61f61b029ec8ab81c97b2d45497ec581 | 2021-06-04T08:02:31.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en-de",
"transformers",
"Quality Estimation",
"monotransquest",
"hter",
"license:apache-2.0"
]
| text-classification | false | TransQuest | null | TransQuest/monotransquest-hter-en_de-it-nmt | 9 | null | transformers | 12,187 | ---
language: en-de
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace](https://huggingface.co/TransQuest).
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_de-it-nmt", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
TransQuest/monotransquest-hter-en_de-it-smt | 432a81a60278dbca9cae8ac4858dcc2ffa9683fe | 2021-06-04T08:03:17.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en-de",
"transformers",
"Quality Estimation",
"monotransquest",
"hter",
"license:apache-2.0"
]
| text-classification | false | TransQuest | null | TransQuest/monotransquest-hter-en_de-it-smt | 9 | null | transformers | 12,188 | ---
language: en-de
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace](https://huggingface.co/TransQuest).
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_de-it-smt", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
TransQuest/monotransquest-hter-en_lv-it-nmt | 540451de1d3738078633a933be9bad0d656684cd | 2021-06-04T08:04:48.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en-lv",
"transformers",
"Quality Estimation",
"monotransquest",
"hter",
"license:apache-2.0"
]
| text-classification | false | TransQuest | null | TransQuest/monotransquest-hter-en_lv-it-nmt | 9 | null | transformers | 12,189 | ---
language: en-lv
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace](https://huggingface.co/TransQuest).
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_lv-it-nmt", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
Vaibhavbrkn/t5-summarization | eae0f49cdd81148bb6d37ed725b5c28fc30654c5 | 2021-06-23T10:30:16.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Vaibhavbrkn | null | Vaibhavbrkn/t5-summarization | 9 | null | transformers | 12,190 | Entry not found |
Wiam/wav2vec2-large-xlsr-arabic-demo-colab | 9836f42138aeaab3eeb02cda17244dc596337f4a | 2021-11-05T09:44:58.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Wiam | null | Wiam/wav2vec2-large-xlsr-arabic-demo-colab | 9 | null | transformers | 12,191 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-arabic-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-arabic-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
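The linear schedule with 500 warmup steps used above can be sketched as a pure function of the step count; `base_lr` and `warmup_steps` match this run, while `total_steps` is an illustrative assumption, not a value recorded in this card:

```python
def linear_schedule_with_warmup(step, base_lr=3e-4, warmup_steps=500, total_steps=3000):
    """Linearly ramp up to base_lr, then decay linearly to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# The peak learning rate is reached exactly at the end of warmup
peak = linear_schedule_with_warmup(500)
```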
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Yv/bert-finetuned-ner-accelerate | 19ce295072db96a4a98175cc0d21ee29d53c5b49 | 2021-12-23T13:30:09.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Yv | null | Yv/bert-finetuned-ner-accelerate | 9 | null | transformers | 12,192 | Entry not found |
ZiweiG/ziwei-bert-imdb | 6fb1e4303c96d292346ed862d40f62b0ce277296 | 2021-05-18T22:52:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ZiweiG | null | ZiweiG/ziwei-bert-imdb | 9 | null | transformers | 12,193 | Entry not found |
aXhyra/demo_irony_31415 | bc8798e4888e72affc76f585d09671e2329c6888 | 2021-12-13T17:54:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_irony_31415 | 9 | null | transformers | 12,194 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_irony_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.685764300192161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_irony_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2905
- F1: 0.6858
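The card reports a single F1 number from the `tweet_eval` irony task. As a reminder of what that score summarizes, here is a minimal sketch of F1 computed from raw counts (the exact averaging the benchmark applies is not stated in the card):

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall from raw counts.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 8 true positives, 2 false positives, 2 false negatives:
print(f1_score(8, 2, 2))
```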
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7735294032820418e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.5872 | 0.6786 |
| 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 |
| 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 |
| 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/irony_trained_1234567 | 664b47695e1919144234e7e18207b0a4cfeea7ce | 2021-12-12T12:22:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/irony_trained_1234567 | 9 | null | transformers | 12,195 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: irony_trained_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6765645067647214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6580
- F1: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
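Every card above lists `Adam with betas=(0.9,0.999) and epsilon=1e-08`. A single Adam parameter update under those settings can be sketched as follows (a hypothetical standalone function, not the `torch.optim` implementation):

```python
def adam_update(m, v, grad, t, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam step: update biased moment estimates, bias-correct them,
    # and return the new moments plus the parameter delta.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    delta = lr * m_hat / (v_hat ** 0.5 + eps)
    return m, v, delta
```

On the first step (`t=1`) with gradient 1.0, both bias-corrected moments equal 1, so the update magnitude is approximately the learning rate itself.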
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6608 | 1.0 | 716 | 0.6057 | 0.6704 |
| 0.5329 | 2.0 | 1432 | 0.8935 | 0.6621 |
| 0.3042 | 3.0 | 2148 | 1.3871 | 0.6822 |
| 0.1769 | 4.0 | 2864 | 1.6580 | 0.6766 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_emotion_1234567 | 0a0715b328e85cb1bc77d362caba9764e6710e54 | 2021-12-15T10:46:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_emotion_1234567 | 9 | null | transformers | 12,196 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_emotion_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7272977042723248
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_emotion_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0237
- F1: 0.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1189 | 1.0 | 408 | 0.6827 | 0.7164 |
| 1.0678 | 2.0 | 816 | 0.6916 | 0.7396 |
| 0.6582 | 3.0 | 1224 | 0.9281 | 0.7276 |
| 0.0024 | 4.0 | 1632 | 1.0237 | 0.7273 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_emotion_31415 | 819c619842c75638f735f43f4a341ad5bde00632 | 2021-12-15T10:41:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_emotion_31415 | 9 | null | transformers | 12,197 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_emotion_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7148501877297316
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_emotion_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1243
- F1: 0.7149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.73 | 1.0 | 408 | 0.8206 | 0.6491 |
| 0.3868 | 2.0 | 816 | 0.7733 | 0.7230 |
| 0.0639 | 3.0 | 1224 | 0.9962 | 0.7101 |
| 0.0507 | 4.0 | 1632 | 1.1243 | 0.7149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_hate_31415 | 73ca7b6e75b93eaaa3ccb60cbf8ea3222d2172fa | 2021-12-15T11:24:57.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_hate_31415 | 9 | null | transformers | 12,198 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_hate_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7729508817074093
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_hate_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8632
- F1: 0.7730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.436235805743952e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.363 | 1.0 | 282 | 0.4997 | 0.7401 |
| 0.2145 | 2.0 | 564 | 0.5071 | 0.7773 |
| 0.1327 | 3.0 | 846 | 0.7109 | 0.7645 |
| 0.0157 | 4.0 | 1128 | 0.8632 | 0.7730 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_sentiment_42 | e5f87467b9d3370c794e199659e7835c2bdb3abc | 2021-12-15T13:28:22.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_sentiment_42 | 9 | null | transformers | 12,199 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_sentiment_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7175864613336908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_sentiment_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6491
- F1: 0.7176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.923967812567773e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4391 | 1.0 | 2851 | 0.6591 | 0.6953 |
| 0.6288 | 2.0 | 5702 | 0.6265 | 0.7158 |
| 0.4071 | 3.0 | 8553 | 0.6401 | 0.7179 |
| 0.6532 | 4.0 | 11404 | 0.6491 | 0.7176 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|