Dataset schema (one record per row; `⌀` marks a nullable column):

| column | type | range / values |
|--------|------|----------------|
| modelId | string | 4–112 chars |
| sha | string | 40 chars |
| lastModified | string | 24 chars |
| tags | list | |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | 2–38 chars, ⌀ |
| config | null | |
| id | string | 4–112 chars |
| downloads | float64 | 0–36.8M, ⌀ |
| likes | float64 | 0–712, ⌀ |
| library_name | string | 17 classes |
| __index_level_0__ | int64 | 0–38.5k |
| readme | string | 0–186k chars |
Helsinki-NLP/opus-mt-fr-kqn | 55c48b031db6ee7fad4dba242a117a21643786a2 | 2021-09-09T21:54:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"kqn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-kqn | 10 | null | transformers | 11,500 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-kqn
* source languages: fr
* target languages: kqn
* OPUS readme: [fr-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kqn | 23.3 | 0.469 |
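As a usage sketch (not part of the original card): OPUS-MT checkpoints are Marian models and load through the `MarianMT` classes in `transformers`; the French example sentence here is illustrative.

```py
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-kqn"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize, translate, and decode the generated target-language tokens
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```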
|
Helsinki-NLP/opus-mt-fr-no | ec2b1ec3eb5b1345cc98eda27e37eff7fe816c3d | 2021-01-18T08:45:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-no | 10 | null | transformers | 11,501 | ---
language:
- fr
- no
tags:
- translation
license: apache-2.0
---
### fra-nor
* source group: French
* target group: Norwegian
* OPUS readme: [fra-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md)
* source language(s): fra
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target language ID); a usage sketch follows this list
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.eval.txt)
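A minimal sketch of the language-token usage (not part of the original card; the French sentence is illustrative, and `nob`/`nno` are the valid target IDs listed above):

```py
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token (>>nob<< for Bokmål, >>nno<< for Nynorsk)
batch = tokenizer([">>nob<< Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```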
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.nor | 36.1 | 0.555 |
### System Info:
- hf_name: fra-nor
- source_languages: fra
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'no']
- src_constituents: {'fra'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: nor
- short_pair: fr-no
- chrF2_score: 0.555
- bleu: 36.1
- brevity_penalty: 0.981
- ref_len: 3089.0
- src_name: French
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: no
- prefer_old: False
- long_pair: fra-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-tl | c8a605061fcd4e667ec00cc80b77d1e39731c346 | 2021-01-18T08:48:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"tl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-tl | 10 | null | transformers | 11,502 | ---
language:
- fr
- tl
tags:
- translation
license: apache-2.0
---
### fra-tgl
* source group: French
* target group: Tagalog
* OPUS readme: [fra-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md)
* source language(s): fra
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.tgl | 24.1 | 0.536 |
### System Info:
- hf_name: fra-tgl
- source_languages: fra
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'tl']
- src_constituents: {'fra'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: tgl
- short_pair: fr-tl
- chrF2_score: 0.536
- bleu: 24.1
- brevity_penalty: 1.0
- ref_len: 5778.0
- src_name: French
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: tl
- prefer_old: False
- long_pair: fra-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-wls | 97b5db4cc967b5367d2f553c0229cce465d8bb08 | 2021-09-09T21:58:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"wls",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-wls | 10 | null | transformers | 11,503 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-wls
* source languages: fr
* target languages: wls
* OPUS readme: [fr-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.wls | 27.5 | 0.478 |
|
Helsinki-NLP/opus-mt-fr-zne | 0783a82515f525e6f006e7147a18d51e2f75faa8 | 2021-09-09T21:58:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"zne",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-zne | 10 | null | transformers | 11,504 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-zne
* source languages: fr
* target languages: zne
* OPUS readme: [fr-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-zne/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.zne | 24.1 | 0.460 |
|
Helsinki-NLP/opus-mt-ig-de | 0084b69aec8c759aaa05592862d9aef0772b7e37 | 2021-09-09T22:11:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ig",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ig-de | 10 | null | transformers | 11,505 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ig-de
* source languages: ig
* target languages: de
* OPUS readme: [ig-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.de | 20.1 | 0.393 |
|
Helsinki-NLP/opus-mt-ig-fi | 240902f320cdf164915020b4b3a0e29af35f65f2 | 2021-09-09T22:11:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ig",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ig-fi | 10 | null | transformers | 11,506 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ig-fi
* source languages: ig
* target languages: fi
* OPUS readme: [ig-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.fi | 23.5 | 0.451 |
|
Helsinki-NLP/opus-mt-ilo-sv | beadc79a61a1a0a6c7080a36b62a82e61753e27b | 2021-09-09T22:12:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ilo",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ilo-sv | 10 | null | transformers | 11,507 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ilo-sv
* source languages: ilo
* target languages: sv
* OPUS readme: [ilo-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ilo-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ilo-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ilo.sv | 31.9 | 0.515 |
|
Helsinki-NLP/opus-mt-lg-es | ffcb8472817743ce83729e165416716259784ce3 | 2021-09-10T13:54:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lg",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lg-es | 10 | null | transformers | 11,508 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-es
* source languages: lg
* target languages: es
* OPUS readme: [lg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.es | 22.1 | 0.393 |
|
Helsinki-NLP/opus-mt-lg-fr | 81884e060814b3278945b53a7598601e4fb17bea | 2021-09-10T13:54:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lg",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lg-fr | 10 | null | transformers | 11,509 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-fr
* source languages: lg
* target languages: fr
* OPUS readme: [lg-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.fr | 23.7 | 0.406 |
|
Helsinki-NLP/opus-mt-lg-sv | ec39a6f639e22be00ea1ee10296db2105b27cec9 | 2021-09-10T13:54:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lg",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lg-sv | 10 | null | transformers | 11,510 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lg-sv
* source languages: lg
* target languages: sv
* OPUS readme: [lg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lg.sv | 24.5 | 0.423 |
|
Helsinki-NLP/opus-mt-lt-ru | 9b62456ce3d1e83fc114841f14c6ebb90abbad0a | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lt",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lt-ru | 10 | null | transformers | 11,511 | ---
language:
- lt
- ru
tags:
- translation
license: apache-2.0
---
### lit-rus
* source group: Lithuanian
* target group: Russian
* OPUS readme: [lit-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md)
* source language(s): lit
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lit.rus | 51.7 | 0.695 |
### System Info:
- hf_name: lit-rus
- source_languages: lit
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'ru']
- src_constituents: {'lit'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt
- src_alpha3: lit
- tgt_alpha3: rus
- short_pair: lt-ru
- chrF2_score: 0.695
- bleu: 51.7
- brevity_penalty: 0.982
- ref_len: 15395.0
- src_name: Lithuanian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: lt
- tgt_alpha2: ru
- prefer_old: False
- long_pair: lit-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-lu-fi | 5314ef637cb45a491756e68c8c35331a5b72cc0d | 2021-09-10T13:55:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lu",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lu-fi | 10 | null | transformers | 11,512 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lu-fi
* source languages: lu
* target languages: fi
* OPUS readme: [lu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lu.fi | 21.4 | 0.442 |
|
Helsinki-NLP/opus-mt-lv-fr | cc6608772f63d05ccae0651fe335cd5d561aee0a | 2021-09-10T13:57:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lv",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lv-fr | 10 | null | transformers | 11,513 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lv-fr
* source languages: lv
* target languages: fr
* OPUS readme: [lv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.fr | 22.1 | 0.437 |
|
Helsinki-NLP/opus-mt-mfe-es | 21ab6da94608acb3e37d8fe567aab658a519ea05 | 2021-09-10T13:57:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mfe",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mfe-es | 10 | null | transformers | 11,514 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mfe-es
* source languages: mfe
* target languages: es
* OPUS readme: [mfe-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfe-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfe.es | 24.0 | 0.418 |
|
Helsinki-NLP/opus-mt-niu-es | e14ce7ed4bb8c8cf30eee96583b6be99b7397047 | 2021-09-10T13:58:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"niu",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-niu-es | 10 | null | transformers | 11,515 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-es
* source languages: niu
* target languages: es
* OPUS readme: [niu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.es | 24.2 | 0.419 |
|
Helsinki-NLP/opus-mt-no-fi | c2078a17f749f08c71b710ce555f34cf79a6b874 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"no",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-no-fi | 10 | null | transformers | 11,516 | ---
language:
- no
- fi
tags:
- translation
license: apache-2.0
---
### nor-fin
* source group: Norwegian
* target group: Finnish
* OPUS readme: [nor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md)
* source language(s): nno nob
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fin | 14.1 | 0.374 |
### System Info:
- hf_name: nor-fin
- source_languages: nor
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fi']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fin
- short_pair: no-fi
- chrF2_score: 0.374
- bleu: 14.1
- brevity_penalty: 0.894
- ref_len: 13066.0
- src_name: Norwegian
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fi
- prefer_old: False
- long_pair: nor-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-pon-sv | 13c611e9f67915672c3b470b6b316c8da68395b3 | 2021-09-10T14:01:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pon",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pon-sv | 10 | null | transformers | 11,517 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pon-sv
* source languages: pon
* target languages: sv
* OPUS readme: [pon-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.sv | 26.4 | 0.436 |
|
Helsinki-NLP/opus-mt-ru-no | 33660009041320d06a1c6b3f6df6956d11e19536 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-no | 10 | null | transformers | 11,518 | ---
language:
- ru
- no
tags:
- translation
license: apache-2.0
---
### rus-nor
* source group: Russian
* target group: Norwegian
* OPUS readme: [rus-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md)
* source language(s): rus
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.nor | 20.3 | 0.418 |
### System Info:
- hf_name: rus-nor
- source_languages: rus
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'no']
- src_constituents: {'rus'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: nor
- short_pair: ru-no
- chrF2_score: 0.418
- bleu: 20.3
- brevity_penalty: 0.946
- ref_len: 11686.0
- src_name: Russian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: no
- prefer_old: False
- long_pair: rus-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-run-sv | 6b11d88bbbce3e9f7e1bcc2aba07f3f560c5984b | 2021-09-10T14:02:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"run",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-run-sv | 10 | null | transformers | 11,519 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-run-sv
* source languages: run
* target languages: sv
* OPUS readme: [run-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.sv | 30.1 | 0.484 |
|
Helsinki-NLP/opus-mt-sl-fr | 89c07fc004f006c2eca854470b3ca27c9db90d73 | 2021-09-10T14:03:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sl",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sl-fr | 10 | null | transformers | 11,520 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sl-fr
* source languages: sl
* target languages: fr
* OPUS readme: [sl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.fr | 25.0 | 0.475 |
|
Helsinki-NLP/opus-mt-sv-hu | a7d71025801a08b0d92489911b069d1b40441b61 | 2021-09-10T14:07:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"hu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-hu | 10 | null | transformers | 11,521 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-hu
* source languages: sv
* target languages: hu
* OPUS readme: [sv-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sv.hu | 44.6 | 0.660 |
|
Helsinki-NLP/opus-mt-sv-ln | 01e4ca35440881a1562eccc8d0186ac35cb4f0c8 | 2021-09-10T14:07:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"ln",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-ln | 10 | null | transformers | 11,522 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ln
* source languages: sv
* target languages: ln
* OPUS readme: [sv-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ln | 30.6 | 0.541 |
|
Helsinki-NLP/opus-mt-sv-mh | e7d142d7f3ae77e1f6baddb6256fe28692dbcb1d | 2021-09-10T14:08:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"mh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-mh | 10 | null | transformers | 11,523 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-mh
* source languages: sv
* target languages: mh
* OPUS readme: [sv-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-mh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-mh/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mh/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mh/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.mh | 23.8 | 0.434 |
|
Helsinki-NLP/opus-mt-sv-tll | 40d13c4054533ec14f4e1660377cc27b18a9c687 | 2021-09-10T14:09:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"tll",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-tll | 10 | null | transformers | 11,524 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-tll
* source languages: sv
* target languages: tll
* OPUS readme: [sv-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tll | 24.9 | 0.484 |
|
Helsinki-NLP/opus-mt-sv-wls | 42268c920e747a098122e32f0711e8ac7f66f057 | 2021-09-11T10:47:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"wls",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-wls | 10 | null | transformers | 11,525 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-wls
* source languages: sv
* target languages: wls
* OPUS readme: [sv-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-wls/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-wls/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-wls/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.wls | 29.0 | 0.501 |
|
Helsinki-NLP/opus-mt-tll-fi | b876f0d43685f5fd0490cbb3109a7d2864bf433b | 2021-09-11T10:48:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tll",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tll-fi | 10 | null | transformers | 11,526 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tll-fi
* source languages: tll
* target languages: fi
* OPUS readme: [tll-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.fi | 22.4 | 0.441 |
|
Helsinki-NLP/opus-mt-toi-fi | db07f7eb3dba7c694bb22b25b73255a88b63f801 | 2021-09-11T10:49:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"toi",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-toi-fi | 10 | null | transformers | 11,527 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-toi-fi
* source languages: toi
* target languages: fi
* OPUS readme: [toi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.fi | 24.5 | 0.464 |
|
Helsinki-NLP/opus-mt-toi-fr | d7a461058f800c6ac29a3b7fe6a0e28996de999b | 2021-09-11T10:49:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"toi",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-toi-fr | 10 | null | transformers | 11,528 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-toi-fr
* source languages: toi
* target languages: fr
* OPUS readme: [toi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.fr | 26.5 | 0.432 |
|
Helsinki-NLP/opus-mt-tpi-sv | d0d216865b2c4453fe2a808ad27e18d1e5ca837c | 2021-09-11T10:49:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tpi",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tpi-sv | 10 | null | transformers | 11,529 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tpi-sv
* source languages: tpi
* target languages: sv
* OPUS readme: [tpi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tpi.sv | 21.6 | 0.396 |
|
Helsinki-NLP/opus-mt-ts-sv | 78a6b6239f0eadd655e7fb7422a45f0bfb366546 | 2021-09-11T10:50:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ts",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ts-sv | 10 | null | transformers | 11,530 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ts-sv
* source languages: ts
* target languages: sv
* OPUS readme: [ts-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.sv | 32.6 | 0.510 |
|
Helsinki-NLP/opus-mt-tvl-es | 197469fa64475e4a3b739029520b04f620e2ea63 | 2021-09-11T10:50:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tvl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tvl-es | 10 | null | transformers | 11,531 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tvl-es
* source languages: tvl
* target languages: es
* OPUS readme: [tvl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.es | 21.0 | 0.388 |
|
Helsinki-NLP/opus-mt-ty-fr | 61b8a458936fa62d85b4cb3ad00046ec4dd1d876 | 2021-09-11T10:51:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ty",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ty-fr | 10 | null | transformers | 11,532 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ty-fr
* source languages: ty
* target languages: fr
* OPUS readme: [ty-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.fr | 30.2 | 0.480 |
|
Helsinki-NLP/opus-mt-wls-sv | 012ca81a2882b73e06e7f552611d28a7bcfe1bcf | 2021-09-11T10:52:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"wls",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-wls-sv | 10 | null | transformers | 11,533 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-wls-sv
* source languages: wls
* target languages: sv
* OPUS readme: [wls-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.sv | 23.8 | 0.408 |
|
Herais/pred_genre | ec5f0318f7519e4b73c4915a0bd32a5a805c37d8 | 2022-02-27T05:26:29.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:Custom",
"transformers",
"classification",
"license:apache-2.0"
]
| text-classification | false | Herais | null | Herais/pred_genre | 10 | null | transformers | 11,534 | ---
language:
- zh
tags:
- classification
license: apache-2.0
datasets:
- Custom
metrics:
- rouge
---
This model predicts the genre of a TV show or movie given a synopsis of about 200 Chinese characters.
The model is trained on TV and movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
```py
import torch
from transformers import BertTokenizer, BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

checkpoint = "Herais/pred_genre"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
# problem_type belongs on the model config, not the tokenizer
model = BertForSequenceClassification.from_pretrained(
    checkpoint, problem_type="single_label_classification").to(device)

label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0,
                  '其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6,
                  '科幻': 9, '神话': 8, '宫廷': 5}
id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇',
                  2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打',
                  9: '科幻', 8: '神话', 5: '宫廷'}

synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """

# Tokenize the synopsis, run inference, and map the predicted id back to a genre label
inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy()
labels_pred = [id2label_genre[label_id] for label_id in label_ids_pred]
print(labels_pred)
# ['涉案']
```
#### Citation
TBA |
Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured | 056596aaf8ad1bb9844169dbabbfb5c723d36b71 | 2021-12-05T13:31:53.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"arxiv:2111.05754",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Intel | null | Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured | 10 | null | transformers | 11,535 | ---
language: en
---
# 90% Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1
This model is the result of fine-tuning a Prune OFA 90% sparse pre-trained BERT-Large, combined with knowledge distillation.
It yields the following results on the SQuADv1.1 development set:<br>
`{"exact_match": 83.56669820245979, "f1": 90.20829352733487}`
For further details see our paper, [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), and our open source implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
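As a usage sketch (not from the card or the paper; the question and context strings are illustrative):

```py
from transformers import pipeline

qa = pipeline("question-answering",
              model="Intel/bert-large-uncased-squadv1.1-sparse-90-unstructured")
result = qa(question="What is combined with the sparse pre-trained model?",
            context="This model combines a 90% sparse pre-trained BERT-Large "
                    "with knowledge distillation, fine-tuned on SQuADv1.1.")
print(result["answer"], round(result["score"], 3))
```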
|
ItcastAI/bert_finetuning_test | 00861c609c1f72d8f14f0dfdfaf0fe2206330005 | 2021-05-18T21:12:26.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ItcastAI | null | ItcastAI/bert_finetuning_test | 10 | null | transformers | 11,536 | Entry not found |
JIWON/bert-base-finetuned-nli | a67fa4db674f8d398db3b018608b72978c997968 | 2022-02-07T00:29:00.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | JIWON | null | JIWON/bert-base-finetuned-nli | 10 | null | transformers | 11,537 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
model-index:
- name: bert-base-finetuned-nli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: nli
metrics:
- name: Accuracy
type: accuracy
value: 0.085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6210
- Accuracy: 0.085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.6210 | 0.085 |
| No log | 2.0 | 392 | 0.5421 | 0.0643 |
| 0.5048 | 3.0 | 588 | 0.5523 | 0.062 |
| 0.5048 | 4.0 | 784 | 0.5769 | 0.0533 |
| 0.5048 | 5.0 | 980 | 0.5959 | 0.052 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
LysandreJik/test-upload1 | 135b14e62bd062e7a2ccf68baef20f4e66e670e1 | 2022-01-28T23:09:48.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | LysandreJik | null | LysandreJik/test-upload1 | 10 | null | transformers | 11,538 | Entry not found |
JuliusAlphonso/dear-jarvis-v5 | 3c40ddbc89448888911bcd168fdd2b691072dcf3 | 2021-06-20T06:59:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | JuliusAlphonso | null | JuliusAlphonso/dear-jarvis-v5 | 10 | null | transformers | 11,539 | ---
license: apache-2.0
datasets:
- null
model_index:
- name: dear-jarvis-v5
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dear-jarvis-v5
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 470 | 0.3106 |
| 0.3452 | 2.0 | 940 | 0.3064 |
| 0.2692 | 3.0 | 1410 | 0.3148 |
### Framework versions
- Transformers 4.7.0
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
Keqipig/DialoGPT-small-spamton | da31e6713d373d0936ee617ac92ae08a47d79a6d | 2022-01-03T22:32:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Keqipig | null | Keqipig/DialoGPT-small-spamton | 10 | null | transformers | 11,540 | ---
tags:
- conversational
---
# Spamton G. Spamton DialoGPT Model |
Khanh/bert-base-multilingual-cased-finetuned-squad | 30561a865c943b7fcfb3680731a3e2ef3d816fd8 | 2022-01-04T14:51:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Khanh | null | Khanh/bert-base-multilingual-cased-finetuned-squad | 10 | null | transformers | 11,541 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1782 | 1.0 | 579 | 0.5258 |
| 0.4938 | 2.0 | 1158 | 0.4639 |
| 0.32 | 3.0 | 1737 | 0.4919 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Kithogue/T5_Question_Generation | 4787357045e77830cef7e03b8ca28f6c937a7bdf | 2021-12-05T15:05:13.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Kithogue | null | Kithogue/T5_Question_Generation | 10 | null | transformers | 11,542 | T5-base fine-tuned on SQuAD and CoQA datasets for question generation
language:
- en-us
tags:
- question-generation
license:
- MIT
datasets:
- SQuAD 2.0
- CoQA |
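As a usage sketch (not part of the original card; the input format expected by this checkpoint is undocumented, so feeding a plain passage is an assumption):

```py
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("Kithogue/T5_Question_Generation")
model = T5ForConditionalGeneration.from_pretrained("Kithogue/T5_Question_Generation")

# The expected input format is not documented; a raw passage is assumed here
text = "The Eiffel Tower was completed in 1889 and is located in Paris."
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|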
KoichiYasuoka/roberta-large-japanese-aozora-char | ac5b863772f16bc0390bee4519d53d32551a2dd6 | 2022-06-22T01:22:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-large-japanese-aozora-char | 10 | null | transformers | 11,543 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# roberta-large-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune `roberta-large-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-char-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora-char")
```
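The checkpoint also works with the fill-mask pipeline; a minimal sketch reusing the widget example from this card:

```py
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-large-japanese-aozora-char")
# Top predictions for the masked character in the card's widget sentence
print(unmasker("日本に着いたら[MASK]を訪ねなさい。"))
```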
## Reference
Koichi Yasuoka: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/) (Building Japanese Dependency-Parsing Models with Transformers and NINJAL Long Unit Words), IPSJ SIG Technical Reports, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
|
LoudlySoft/scibert_scivocab_uncased_squad | 868a1bbceb58647ba779031db0a4f491268abbab | 2021-05-18T21:28:54.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | LoudlySoft | null | LoudlySoft/scibert_scivocab_uncased_squad | 10 | null | transformers | 11,544 | ## AllenAI's <i>scibert_scivocab_uncased</i> fine-tuned on SQuAD 2.0 evaluated with F1 = 86.85
#### To load the model:
```python
from transformers import BertTokenizerFast, BertForQuestionAnswering

tokenizer = BertTokenizerFast.from_pretrained("LoudlySoft/scibert_scivocab_uncased_squad")
model = BertForQuestionAnswering.from_pretrained("LoudlySoft/scibert_scivocab_uncased_squad")
```
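With the model and tokenizer loaded, a minimal extractive QA sketch using the `question-answering` pipeline (the question and context below are illustrative placeholders):

```python
from transformers import pipeline

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="What was the F1 score?",
    context="The fine-tuned SciBERT model reached an F1 score of 86.85 on SQuAD 2.0.",
)
print(result["answer"], result["score"])
```
|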
Maaly/bgc-accession | 971d9914caeb45ec4b517f8d7735c6f0cc004ad5 | 2022-05-28T15:34:44.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Maaly | null | Maaly/bgc-accession | 10 | null | transformers | 11,545 | The bgc-accession model is a Named Entity Recognition (NER) model that identifies and annotates accession numbers of biosynthetic gene clusters in text.
The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_bgcs_annotations
Testing examples:
1. The genome sequences of Leptolyngbya sp. PCC 7375 (ALVN00000000) and G. sunshinyii YC6258 (NZ_CP007142.1) were obtained previously.36,59
2. K311 was sequenced (NCBI accession number: JN852959) and analyzed with FramePlot and 18 genes were predicted to be involved in echinomycin biosynthesis (Figure 2).
3. The mar cluster was sequenced and annotated and the complete sequence was deposited into Genbank (accession KF711829).
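A minimal tagging sketch with the Transformers NER pipeline (the aggregation strategy is a reasonable default, not one specified by the authors):

```python
from transformers import pipeline

ner = pipeline("ner", model="Maaly/bgc-accession", aggregation_strategy="simple")
sentence = ("K311 was sequenced (NCBI accession number: JN852959) and analyzed with FramePlot "
            "and 18 genes were predicted to be involved in echinomycin biosynthesis.")
print(ner(sentence))  # should tag the accession number span, e.g. JN852959
```
|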
Media1129/keyword-tag-model-2000-9-16 | 220e61e0b7590ac850ad5e69d0feee0d3d9b7952 | 2021-09-16T16:51:08.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-2000-9-16 | 10 | null | transformers | 11,546 | Entry not found |
Media1129/keyword-tag-model-2000-9-16_more_ingredient | 8a05e959a6ed4537e713f0acd76220bd9ae09e0c | 2021-09-17T01:50:36.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-2000-9-16_more_ingredient | 10 | null | transformers | 11,547 | Entry not found |
MiBo/SADistilGPT2 | ffcc7a387e9941fd4168241153e684c3da508bf2 | 2021-07-06T23:31:25.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | false | MiBo | null | MiBo/SADistilGPT2 | 10 | null | transformers | 11,548 | Entry not found |
MilkyLatte/q-g-model | 99ee4086d2f096b3ee9267abf6a3f5e7a381b94b | 2021-06-23T03:19:25.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | MilkyLatte | null | MilkyLatte/q-g-model | 10 | null | transformers | 11,549 | Entry not found |
MoritzLaurer/MiniLM-L6-mnli-binary | dcf5730f33554768ee718c26a94ae31afbe6583e | 2021-12-13T10:37:22.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"zero-shot-classification"
]
| text-classification | false | MoritzLaurer | null | MoritzLaurer/MiniLM-L6-mnli-binary | 10 | null | transformers | 11,550 | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I liked the movie. [SEP] The movie was good."
---
# MiniLM-L6-mnli-binary
## Model description
This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset. The model was trained for binary NLI, which means that the "neutral" and "contradiction" classes were merged into one class. The model therefore predicts "entailment" or "not_entailment".
The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_name = "MoritzLaurer/MiniLM-L6-mnli-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "I liked the movie"
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
[MultiNLI](https://huggingface.co/datasets/multi_nli).
### Training procedure
MiniLM-L6-mnli-binary was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary (matched) test set from MultiNLI. Accuracy: 0.886
## Limitations and bias
Please consult the original MiniLM paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. |
Muennighoff/SBERT-large-nli-v2 | 9bcc1af97540b7799b2e42f10e4f926d7aea7011 | 2022-02-21T06:16:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | Muennighoff | null | Muennighoff/SBERT-large-nli-v2 | 10 | null | sentence-transformers | 11,551 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SBERT-large-nli-v2
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
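Since the checkpoint follows the standard sentence-transformers layout (see the full architecture below), it should also load directly with that library; a minimal sketch, assuming `sentence-transformers` is installed:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SBERT-large-nli-v2")
embeddings = model.encode(["A sample sentence.", "Another sample sentence."])
print(embeddings.shape)  # (2, 1024): mean-pooled BERT-large embeddings
```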
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
NDugar/v3large-1epoch | 7f9e5b4db644007b9a84739961ee40d0b4c7c2ff | 2021-12-06T20:04:26.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
]
| zero-shot-classification | false | NDugar | null | NDugar/v3large-1epoch | 10 | null | transformers | 11,552 | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size. The total parameters are 1.5B and it is trained with 160GB raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \\
run_glue.py \\
--model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 256 \\
--per_device_train_batch_size ${batch_size} \\
--learning_rate 3e-6 \\
--num_train_epochs 3 \\
--output_dir $output_dir \\
--overwrite_output_dir \\
--logging_steps 10 \\
--logging_dir $output_dir \\
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` |
NamPE/DialoGPT-medium-Aqua-konosuba | 57912f57fd281aac354d9abcd52bb0fc626fb26c | 2022-01-01T16:35:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | NamPE | null | NamPE/DialoGPT-medium-Aqua-konosuba | 10 | null | transformers | 11,553 | ---
tags:
- conversational
---
# Aqua from Konosuba DialoGPT Model
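A minimal chat sketch (this card ships no usage example, so the `conversational` pipeline below is an assumption based on how other DialoGPT checkpoints are typically used):

```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="NamPE/DialoGPT-medium-Aqua-konosuba")
conversation = Conversation("Hi Aqua, how are you?")
print(chatbot(conversation))
```
|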
NbAiLab/roberta_jan_128_ncc | c2f407263c11c2837aa33f87b146123efb55103c | 2022-02-04T09:42:27.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | NbAiLab | null | NbAiLab/roberta_jan_128_ncc | 10 | null | transformers | 11,554 | Entry not found |
Nehc/adpatres | f708fa11f6bf438d7fb8c60169a6d1a30208abac | 2021-10-21T05:40:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers"
]
| text-generation | false | Nehc | null | Nehc/adpatres | 10 | null | transformers | 11,555 | ---
language:
- ru
widget:
- text: "Смерти нет, "
---
not for use...
technical data |
PolyakovMaxim/ModelGptTS | 6633a3616cfaf091a1f4e51668e6aa10e03d6f8b | 2021-11-01T11:46:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | PolyakovMaxim | null | PolyakovMaxim/ModelGptTS | 10 | null | transformers | 11,556 | This model generates texts for the Norbit Company's time shifts and, like the base GPT model, also generates plausible completions for arbitrary phrases.
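A minimal generation sketch (the prompt is a placeholder; the card does not document the expected input format):

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="PolyakovMaxim/ModelGptTS")
set_seed(42)
print(generator("Time shift report:", max_length=50)[0]["generated_text"])
```
|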
Pyke/bart-finetuned-on-patent-Deepspeed-DS-1 | 7ac216e4a3164420146f1c033bcea119d8edcbe3 | 2021-08-18T02:33:27.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Pyke | null | Pyke/bart-finetuned-on-patent-Deepspeed-DS-1 | 10 | null | transformers | 11,557 | Entry not found |
Ratul/sci_ner | c027061cf895977933767becc4fefc351952f2be | 2021-06-01T08:48:27.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Ratul | null | Ratul/sci_ner | 10 | null | transformers | 11,558 | Entry not found |
Rostlab/prot_electra_generator_bfd | 7095cb2c0d0689c589dfb2e4dddc4e39ffe4f7dc | 2020-12-18T20:15:23.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Rostlab | null | Rostlab/prot_electra_generator_bfd | 10 | null | transformers | 11,559 | Entry not found |
RuudVelo/wav2vec2-large-xls-r-1b-nl-lm | a18038591d57e6ca83ca8662246f83f03d4eefde | 2022-03-24T11:55:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | RuudVelo | null | RuudVelo/wav2vec2-large-xls-r-1b-nl-lm | 10 | null | transformers | 11,560 | ---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- nl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-1b-nl-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 9.73
- name: Test CER
type: cer
value: 2.89
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 27.27
- name: Test CER
type: cer
value: 13.23
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 27.67
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-nl-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice 8 dataset.
It achieves the following results on the test set:
- Loss: 0.1479
- Wer: 0.1156
Note that the above test results come from the original model without LM (language model) which can be found at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl. The results with the LM model can be found on the right side of this model card.
## Model description
This is the [RuudVelo/wav2vec2-large-xls-r-1b-nl](https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl) model augmented with a KenLM 5-gram language model.
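A minimal transcription sketch (the audio file name is a placeholder; decoding with the bundled 5-gram requires `pyctcdecode` and `kenlm` to be installed):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="RuudVelo/wav2vec2-large-xls-r-1b-nl-lm")
print(asr("dutch_sample.wav")["text"])  # expects 16 kHz mono audio
```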
## Intended uses & limitations
More information needed
## Training and evaluation data
The Dutch (nl) split of the Common Voice 8 dataset was used to train the model.
## Training procedure
### Training hyperparameters
Parameters can be found in the run.sh file at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 |
SEBIS/code_trans_t5_base_code_documentation_generation_java_transfer_learning_finetune | 14b0317974f9b7eeee4b71b1d5563d38829a3238 | 2021-06-23T04:26:33.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_java_transfer_learning_finetune | 10 | null | transformers | 11,561 | ---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_java_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/java/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune | 7e1a54dc9be62a8b0cf15b13ba5a582be3e7138f | 2021-06-23T08:34:21.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune | 10 | null | transformers | 11,562 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model on git commit using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commit: it works best with tokenized git commit.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the git commit message generation task for the java commit changes.
## Intended uses & limitations
The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate git commit message using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/commit%20generation/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes.
## Evaluation results
For the git commit message generation task, different models achieve the following results (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_trans_en_it_small_finetuned | 77357a9844444db58ac01b6364f14f27969fb9b9 | 2021-06-23T09:39:10.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Italian model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_it_small_finetuned | 10 | null | transformers | 11,563 |
---
language: English Italian
tags:
- translation English Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Preventing and combating trafficking in human beings, and protecting victims"
---
# legal_t5_small_trans_en_it_small_finetuned model
A model for translating legal text from English to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_en_it_small_finetuned was initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_it_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Italian.
### How to use
Here is how to use this model to translate legal text from English to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_it_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "Preventing and combating trafficking in human beings, and protecting victims"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_it_small_finetuned model (covering the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte-pair encoding) used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used for translation test dataset, achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_it_small_finetuned | 46.887|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
Saitomar/wav2vec2-large-xls-r-300m-bengali-kaggle | de14db5614b8ec36cac09245db1f4c3be4700bb4 | 2022-02-07T09:16:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Saitomar | null | Saitomar/wav2vec2-large-xls-r-300m-bengali-kaggle | 10 | null | transformers | 11,564 | Entry not found |
SetFit/deberta-v3-large__sst2__train-16-5 | 9abc4039f0cc7cb738f1c19f05e22da14559db12 | 2022-02-10T10:56:06.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/deberta-v3-large__sst2__train-16-5 | 10 | null | transformers | 11,565 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-5
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5433
- Accuracy: 0.7924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6774 | 1.0 | 7 | 0.7450 | 0.2857 |
| 0.7017 | 2.0 | 14 | 0.7552 | 0.2857 |
| 0.6438 | 3.0 | 21 | 0.7140 | 0.4286 |
| 0.3525 | 4.0 | 28 | 0.5570 | 0.7143 |
| 0.2061 | 5.0 | 35 | 0.5303 | 0.8571 |
| 0.0205 | 6.0 | 42 | 0.6706 | 0.8571 |
| 0.0068 | 7.0 | 49 | 0.8284 | 0.8571 |
| 0.0029 | 8.0 | 56 | 0.9281 | 0.8571 |
| 0.0015 | 9.0 | 63 | 0.9871 | 0.8571 |
| 0.0013 | 10.0 | 70 | 1.0208 | 0.8571 |
| 0.0008 | 11.0 | 77 | 1.0329 | 0.8571 |
| 0.0005 | 12.0 | 84 | 1.0348 | 0.8571 |
| 0.0004 | 13.0 | 91 | 1.0437 | 0.8571 |
| 0.0005 | 14.0 | 98 | 1.0512 | 0.8571 |
| 0.0004 | 15.0 | 105 | 1.0639 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Shahm/bert-court-german | 728f04d15772c44f46ebcbd98eb94af26cbb218c | 2021-12-31T21:21:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Shahm | null | Shahm/bert-court-german | 10 | null | transformers | 11,566 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: plus-bert-court-90k-end-german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plus-bert-court-90k-end-german
This model is a fine-tuned version of [Shahm/plus-bert-court-50k-90k-german](https://huggingface.co/Shahm/plus-bert-court-50k-90k-german) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.11.0
|
Shauli/RE-metric-model-spike | 31394bf0fe472b4ac5e49d85a8d16d0c8fdd85ed | 2021-05-18T22:36:05.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Shauli | null | Shauli/RE-metric-model-spike | 10 | null | transformers | 11,567 | Entry not found |
Spirax/DialoGPT-medium-sheldon | eed9e6ee5e919321c64bc4799444637d26f2a5e4 | 2021-07-21T21:03:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
]
| conversational | false | Spirax | null | Spirax/DialoGPT-medium-sheldon | 10 | null | transformers | 11,568 | ---
thumbnail: https://i.imgur.com/7HAcbbD.gif
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a TV Series Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a TV series character, Sheldon from [The Big Bang Theory](https://en.wikipedia.org/wiki/The_Big_Bang_Theory). The data comes from [a Kaggle TV series script dataset](https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript).
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("spirax/DialoGPT-medium-sheldon")
model = AutoModelWithLMHead.from_pretrained("spirax/DialoGPT-medium-sheldon")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("SheldorBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
StevenLimcorn/indo-roberta-indonli | 638a50452c733bf4c77c9e81a51cc9ce5d998e63 | 2021-11-11T09:03:59.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"id",
"dataset:indonli",
"transformers",
"license:mit"
]
| text-classification | false | StevenLimcorn | null | StevenLimcorn/indo-roberta-indonli | 10 | null | transformers | 11,569 | ---
language: id
tags:
- roberta
license: mit
datasets:
- indonli
widget:
- text: "Amir Sjarifoeddin Harahap lahir di Kota Medan, Sumatera Utara, 27 April 1907. Ia meninggal di Surakarta, Jawa Tengah, pada 19 Desember 1948 dalam usia 41 tahun. </s></s> Amir Sjarifoeddin Harahap masih hidup."
---
## Indo-roberta-indonli
Indo-roberta-indonli is a natural language inference classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the [IndoNLI](https://github.com/ir-nlp-csui/indonli/tree/main/data/indonli) dataset: the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model was transfer-learned into a natural language inference classifier. The model was tested using the validation, test_lay and test_expert datasets given in the GitHub repository. The results are shown below.
### Result
| Dataset | Accuracy | F1 | Precision | Recall |
|-------------|----------|---------|-----------|---------|
| Test Lay | 0.74329 | 0.74075 | 0.74283 | 0.74133 |
| Test Expert | 0.6115 | 0.60543 | 0.63924 | 0.61742 |
## Model
The model was trained for 5 epochs with batch size 16, learning rate 2e-5 and weight decay 0.01, achieving the metrics shown below.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|----------|
| 1 | 0.942500 | 0.658559 | 0.737369 | 0.735552 | 0.735488 | 0.736679 |
| 2 | 0.649200 | 0.645290 | 0.761493 | 0.759593 | 0.762784 | 0.759642 |
| 3 | 0.437100 | 0.667163 | 0.766045 | 0.763979 | 0.765740 | 0.763792 |
| 4 | 0.282000 | 0.786683 | 0.764679 | 0.761802 | 0.762011 | 0.761684 |
| 5 | 0.193500 | 0.925717 | 0.765134 | 0.763127 | 0.763560 | 0.763489 |
## How to Use
### As NLI Classifier
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/indonesian-roberta-indonli"
nlp = pipeline(
"zero-shot-classification",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Amir Sjarifoeddin Harahap lahir di Kota Medan, Sumatera Utara, 27 April 1907. Ia meninggal di Surakarta, Jawa Tengah, pada 19 Desember 1948 dalam usia 41 tahun. </s></s> Amir Sjarifoeddin Harahap masih hidup.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `INDONLI` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base IndoNLI was trained and evaluated by [Steven Limcorn](https://github.com/stevenlimcorn). All computation and development are done on Google Colaboratory using their free GPU access.
## Reference
The dataset we used is by IndoNLI.
```
@inproceedings{indonli,
title = "IndoNLI: A Natural Language Inference Dataset for Indonesian",
author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
}
``` |
ThaiUWA/gpt-2-josh-uwa | 2af0d04808c741c2738d5162f11b682bf4b1014e | 2021-05-21T11:18:58.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ThaiUWA | null | ThaiUWA/gpt-2-josh-uwa | 10 | null | transformers | 11,570 | Entry not found |
Xenova/sponsorblock-base-v1.1 | 1850c9dc36b7f10adf0447a8e58f497dec710517 | 2022-02-12T22:04:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Xenova | null | Xenova/sponsorblock-base-v1.1 | 10 | 1 | transformers | 11,571 | Entry not found |
Xeouz/Ultron-Small | c93a401050d43034d807ad87d7e273695837ee6e | 2021-10-09T08:22:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Xeouz | null | Xeouz/Ultron-Small | 10 | null | transformers | 11,572 | ---
tags:
- conversational
---
# Ultron Small |
ZYW/squad-mbert-model | cbfb2a0a0e99f04d6d24c680c2057c4f2ef8158a | 2021-05-30T15:15:53.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"model-index",
"autotrain_compatible"
]
| question-answering | false | ZYW | null | ZYW/squad-mbert-model | 10 | null | transformers | 11,573 | ---
model-index:
- name: squad-mbert-model
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-mbert-model
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.7.0
- Tokenizers 0.10.3
|
ZiweiG/ziwei-bertimdb-prob | 109776a72df5189f198a10bdd7b79f083a0819cf | 2021-05-18T22:53:05.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ZiweiG | null | ZiweiG/ziwei-bertimdb-prob | 10 | null | transformers | 11,574 | Entry not found |
aXhyra/demo_sentiment_42 | 44d52a07515bf5c860ed9b512b5511d9dd5c4a54 | 2021-12-13T22:41:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/demo_sentiment_42 | 10 | null | transformers | 11,575 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_sentiment_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7113620044371958
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_sentiment_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6332
- F1: 0.7114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.62486660723695e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 |
| 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 |
| 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 |
| 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/irony_trained_final | f176d328c63a97ca4a3b542efd79d33236e9cb2a | 2021-12-12T10:28:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/irony_trained_final | 10 | null | transformers | 11,576 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: irony_trained_final
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: irony
metrics:
- name: F1
type: f1
value: 0.6879413493337545
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_trained_final
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4770
- F1: 0.6879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.842398023893579e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6852 | 1.0 | 716 | 0.6488 | 0.6530 |
| 0.6263 | 2.0 | 1432 | 0.7647 | 0.6511 |
| 0.4511 | 3.0 | 2148 | 1.2251 | 0.6764 |
| 0.2578 | 4.0 | 2864 | 1.4770 | 0.6879 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aXhyra/presentation_emotion_42 | 4404d8e9173083ccf1e43fbd242a6a40654b6e13 | 2021-12-15T10:36:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | aXhyra | null | aXhyra/presentation_emotion_42 | 10 | null | transformers | 11,577 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_emotion_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.732897530282475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_emotion_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0989
- F1: 0.7329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3703 | 1.0 | 408 | 0.6624 | 0.7029 |
| 0.2122 | 2.0 | 816 | 0.6684 | 0.7258 |
| 0.9452 | 3.0 | 1224 | 1.0001 | 0.7041 |
| 0.0023 | 4.0 | 1632 | 1.0989 | 0.7329 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
abhiramtirumala/DialoGPT-sarcastic-medium | 7b88eb0aec7fc096164d3dc80d54ff95bdfc6304 | 2021-05-27T21:33:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | abhiramtirumala | null | abhiramtirumala/DialoGPT-sarcastic-medium | 10 | null | transformers | 11,578 | Entry not found |
abhishek/autonlp-hindi-question-answering-23865268 | f134199adaa376a2051bd6e6f8251fbeb53ba623 | 2021-10-21T13:51:44.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"hi",
"dataset:abhishek/autonlp-data-hindi-question-answering",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| question-answering | false | abhishek | null | abhishek/autonlp-hindi-question-answering-23865268 | 10 | 3 | transformers | 11,579 | ---
tags:
- autonlp
- question-answering
language: hi
widget:
- text: "´सतीश धवन अंतरिक्ष केंद्र´ किस राज्य में स्थित है?"
context: "सतीश धवन अंतरिक्ष केंद्र, भारतीय अंतरिक्ष अनुसंधान संगठन (इसरो) का प्रक्षेपण केंद्र है। यह आंध्र प्रदेश के श्रीहरीकोटा में स्थित है, इसे 'श्रीहरीकोटा रेंज' या 'श्रीहरीकोटा लाँचिंग रेंज' के नाम से भी जाना जाता है। 2002 में इसरो के पूर्व प्रबंधक और वैज्ञानिक सतीश धवन के मरणोपरांत उनके सम्मान में इसका नाम बदला गया। प्रक्षेपण यान की असेम्\u200dबली के लिए दूसरा भवन केन्\u200dद्रीय मंत्रिमंडल ने 12 सितम्\u200dबर, 2013 को सतीश धवन अंतरिक्ष केन्\u200dद्र, श्रीहरिकोटा में प्रक्षेपण यान की असेम्\u200dबली के लिए दूसरे भवन के निर्माण की मंजूरी दी। इस पर 363.95 करोड़ रुपये की अनुमानित लागत आएगी, जिसमें सात करोड़ रुपये का खर्च विदेशी मुद्रा में होगा। इस दूसरी बिल्डिंग के उपलब्\u200dध हो जाने से पीएसएलवी और जीएसएलवी की प्रक्षेपण फ्रीक्वेंसी बढ़ेगी। यह जीएसएलवी एमके-III के एकीकरण के लिए वर्तमान व्\u200dहीकल असेम्\u200dबली बिल्डिंग को अतिरिक्\u200dत सुविधा मुहैया करायेगी। तीसरे प्रक्षेपण पैड तथा भविष्\u200dय में सामान्\u200dय यान प्रक्षेपण के लिए भी इससे काफी सुविधा मिलेगी।[1]\nलांच पैड\nउपग्रह प्रक्षेपण यान लॉन्च पैड\nइस लांच पैड से उपग्रह प्रक्षेपण यान और संवर्धित उपग्रह प्रक्षेपण यान को लांच किया गया था। यह वर्तमान प्रक्षेपण स्थल के दक्षिणी सिरे पर स्थित है। इसे सेवामुक्त कर दिया गया है। शुरू में इसे उपग्रह प्रक्षेपण यान लांच करने के लिए बनाया गया था। लेकिन बाद में इसे संवर्धित उपग्रह प्रक्षेपण यान प्रक्षेपण परिसर के रूप में इस्तेमाल किया गया था।\nप्रथम लांच पैड\nद्वितीय लॉन्च पैड\nतृतीय लांच पैड\nसन्दर्भ श्रेणी:भारतीय अंतरिक्ष अनुसंधान संगठन\nश्रेणी:भारत के रॉकेट प्रक्षेपण स्थल"
datasets:
- abhishek/autonlp-data-hindi-question-answering
co2_eq_emissions: 39.76330395590446
---
# Model Trained Using AutoNLP
- Problem type: Extractive Question Answering
- CO2 Emissions (in grams): 39.76330395590446
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-hindi-question-answering-23865268
```
Or Python API:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-hindi-question-answering-23865268", use_auth_token=True)
question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP"
inputs = tokenizer(question, text, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)
# decode the most likely answer span from the start/end logits
start_index = outputs.start_logits.argmax()
end_index = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0, start_index:end_index + 1])
print(answer)
``` |
adamlin/ml999_matal_bed | db0898faa751675a3c1022a9c9beda7c0f1b4b22 | 2021-12-20T16:47:36.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_matal_bed | 10 | null | transformers | 11,580 | Entry not found |
adamlin/ml999_power_punching_and_shearing_machinery | bf92562d8257c28fa68fbb4e307be32ece7d0cc0 | 2021-12-20T16:54:51.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_power_punching_and_shearing_machinery | 10 | null | transformers | 11,581 | Entry not found |
adamlin/ml999_power_stacker | 006c65e31658d3a701a9343856d70156d2256045 | 2021-12-20T16:53:28.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | adamlin | null | adamlin/ml999_power_stacker | 10 | null | transformers | 11,582 | Entry not found |
addy88/perceiver_imdb | b6bc811da897db11ab1c5ef848069cf8e625a511 | 2022-01-02T11:20:07.000Z | [
"pytorch",
"perceiver",
"text-classification",
"transformers"
]
| text-classification | false | addy88 | null | addy88/perceiver_imdb | 10 | null | transformers | 11,583 | ### How to use
Here is how to use this model in PyTorch:
```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

device = "cuda" if torch.cuda.is_available() else "cpu"  # define the device used below
tokenizer = PerceiverTokenizer.from_pretrained("addy88/perceiver_imdb")
model = PerceiverForMaskedLM.from_pretrained("addy88/perceiver_imdb").to(device)
text = "This is an incomplete sentence where some words are missing."
# prepare input
encoding = tokenizer(text, padding="max_length", return_tensors="pt")
# mask " missing.". Note that the model performs much better if the masked span starts with a space.
encoding.input_ids[0, 52:61] = tokenizer.mask_token_id
inputs, input_mask = encoding.input_ids.to(device), encoding.attention_mask.to(device)
# forward pass
outputs = model(inputs=inputs, attention_mask=input_mask)
logits = outputs.logits
masked_tokens_predictions = logits[0, 51:61].argmax(dim=-1)
print(tokenizer.decode(masked_tokens_predictions))
# >>> should print " missing."
``` |
airKlizz/bart-large-multi-fr-wiki-news | 35aa402131777752ca87afdd426ba9b515cab5b0 | 2021-10-17T20:10:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"fr",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/bart-large-multi-fr-wiki-news | 10 | null | transformers | 11,584 | ---
language: fr
license: mit
---
|
airKlizz/mt5-base-wikinewssum-german | fb4cd1036751a47f10c7a5d8e15d72fe7c604896 | 2021-12-25T15:13:41.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-german | 10 | null | transformers | 11,585 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-german
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-german
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5135
- Rouge1: 8.0553
- Rouge2: 2.7846
- Rougel: 6.2182
- Rougelsum: 7.6203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 723 | 2.7112 | 7.3681 | 2.3679 | 5.5705 | 6.7588 |
| No log | 2.0 | 1446 | 2.6178 | 7.8539 | 2.7551 | 6.2081 | 7.4139 |
| No log | 3.0 | 2169 | 2.5756 | 7.8401 | 2.6075 | 6.0135 | 7.4303 |
| No log | 4.0 | 2892 | 2.5465 | 8.1097 | 2.8525 | 6.268 | 7.6482 |
| 3.4589 | 5.0 | 3615 | 2.5315 | 8.0192 | 2.7848 | 6.2484 | 7.5859 |
| 3.4589 | 6.0 | 4338 | 2.5222 | 8.1063 | 2.8986 | 6.337 | 7.6564 |
| 3.4589 | 7.0 | 5061 | 2.5136 | 8.0565 | 2.8707 | 6.2732 | 7.6105 |
| 3.4589 | 8.0 | 5784 | 2.5135 | 8.0553 | 2.7846 | 6.2182 | 7.6203 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
akahana/indonesia-sentiment-roberta | 5bbecef6101e5a5c3f4f4f5d1a72ee7653a5da1a | 2021-12-07T04:26:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"id",
"transformers"
]
| text-classification | false | akahana | null | akahana/indonesia-sentiment-roberta | 10 | null | transformers | 11,586 | ---
language: "id"
widget:
- text: "dia orang yang baik ya bunds."
---
## how to use
```python
from transformers import pipeline, set_seed

path = "akahana/indonesia-sentiment-roberta"
# device=0 runs on the first GPU; use device=-1 for CPU-only inference
emotion = pipeline("text-classification", model=path, device=0)

set_seed(42)
kalimat = "dia orang yang baik ya bunds."  # Indonesian: "they are a good person, hun"
preds = emotion(kalimat)
print(preds)
``` |
akdeniz27/convbert-base-turkish-cased-ner | f23c5c89ed519c6970942119fb97a6a966d4a0ba | 2021-09-15T17:02:16.000Z | [
"pytorch",
"convbert",
"token-classification",
"tr",
"arxiv:2008.02496",
"transformers",
"autotrain_compatible"
]
| token-classification | false | akdeniz27 | null | akdeniz27/convbert-base-turkish-cased-ner | 10 | null | transformers | 11,587 | ---
language: tr
widget:
- text: "Almanya, koronavirüs aşısını geliştiren Dr. Özlem Türeci ve eşi Prof. Dr. Uğur Şahin'e liyakat nişanı verdi"
---
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned version of dbmdz/convbert-base-turkish-cased (ConvBERTurk),
trained on a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/convbert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
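For reference, here is a minimal sketch (an assumption, not the author's original script) of how these parameters could plug into the Hugging Face `Trainer`; dataset loading and the usual token-classification label alignment are omitted:
```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
model_checkpoint = "dbmdz/convbert-base-turkish-cased"

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=len(label_list)
)

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="convbert-turkish-ner",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
)

# train_dataset / eval_dataset: tokenized NER splits (max_length=512) with
# labels aligned to word pieces, prepared with the standard recipe.
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```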
# How to use:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/convbert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/convbert-base-turkish-cased-ner")
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
# See https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html
# for details on entity grouping with the aggregation_strategy parameter.
```
# Reference test results:
* accuracy: 0.9937648915431506
* f1: 0.9610945644080416
* precision: 0.9619899385131359
* recall: 0.9602008554956295 |
alireza7/PEGASUS-persian-base-wiki-summary | 8a1f0e3d8d17d3be154856d3d31ec27501e00af6 | 2021-09-29T19:26:15.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-wiki-summary | 10 | null | transformers | 11,588 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
allenai/dsp_roberta_base_dapt_biomed_tapt_rct_500 | 5be845b8cab84d67dc3ccf8d9d7ffd5aceea445c | 2021-05-20T13:07:27.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_dapt_biomed_tapt_rct_500 | 10 | 1 | transformers | 11,589 | Entry not found |
aloxatel/KS8 | 95c7a7cb2324822fee5bb1409c6145ce57245593 | 2021-05-20T13:52:21.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | aloxatel | null | aloxatel/KS8 | 10 | null | transformers | 11,590 | Entry not found |
am4nsolanki/autonlp-text-hateful-memes-36789092 | 91900b9a7bc52ba53cb9d3fc1e61a2350d30bfba | 2021-11-28T22:35:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:am4nsolanki/autonlp-data-text-hateful-memes",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | am4nsolanki | null | am4nsolanki/autonlp-text-hateful-memes-36789092 | 10 | 1 | transformers | 11,591 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- am4nsolanki/autonlp-data-text-hateful-memes
co2_eq_emissions: 1.4280361775467445
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36789092
- CO2 Emissions (in grams): 1.4280361775467445
## Validation Metrics
- Loss: 0.5255328416824341
- Accuracy: 0.7666078777189889
- Precision: 0.6913123844731978
- Recall: 0.6192052980132451
- AUC: 0.7893359070795125
- F1: 0.6532751091703057
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/am4nsolanki/autonlp-text-hateful-memes-36789092
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
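# Not part of the original card: the logits can be turned into a predicted
# class index for this binary classifier, e.g.:
predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)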
``` |
amoux/scibert_nli_squad | 1cf47b25d327491436d6aaf0d151ee671ae2cc8a | 2021-05-18T23:36:56.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | amoux | null | amoux/scibert_nli_squad | 10 | null | transformers | 11,592 | Entry not found |
andi611/distilbert-base-uncased-ner-conll2003 | 3a0d1d69958cfedaa289da6e2c1e134958d42ba9 | 2021-07-03T13:08:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | andi611 | null | andi611/distilbert-base-uncased-ner-conll2003 | 10 | null | transformers | 11,593 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.985193893275295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0664
- Precision: 0.9332
- Recall: 0.9423
- F1: 0.9377
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
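As a minimal usage sketch (not part of the original card), the checkpoint can presumably be loaded with the standard token-classification pipeline; the CoNLL-2003 label set (PER, ORG, LOC, MISC) is assumed:
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="andi611/distilbert-base-uncased-ner-conll2003",
    aggregation_strategy="simple",  # group word pieces into whole entities
)

# The model is uncased, so lowercased input is fine.
print(ner("hugging face is based in new york city"))
```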
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2042 | 1.0 | 878 | 0.0636 | 0.9230 | 0.9253 | 0.9241 | 0.9822 |
| 0.0428 | 2.0 | 1756 | 0.0577 | 0.9286 | 0.9370 | 0.9328 | 0.9841 |
| 0.0199 | 3.0 | 2634 | 0.0606 | 0.9364 | 0.9401 | 0.9383 | 0.9851 |
| 0.0121 | 4.0 | 3512 | 0.0641 | 0.9339 | 0.9380 | 0.9360 | 0.9847 |
| 0.0079 | 5.0 | 4390 | 0.0664 | 0.9332 | 0.9423 | 0.9377 | 0.9852 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
annafavaro/distilbert-base-uncased-finetuned-cola | 3bf191f85e4e0159a0c039ddd00f5a2afc3e877d | 2021-12-01T05:13:59.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | annafavaro | null | annafavaro/distilbert-base-uncased-finetuned-cola | 10 | null | transformers | 11,594 | Entry not found |
anton-l/wav2vec2-large-xlsr-53-kyrgyz | 6b67fd3e70ce4e0d40d2a6cc98a84c3272b24d65 | 2021-07-05T19:53:54.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ky",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-kyrgyz | 10 | null | transformers | 11,595 | ---
language: ky
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Kyrgyz XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ky
type: common_voice
args: ky
metrics:
- name: Test WER
type: wer
value: 31.88
---
# Wav2Vec2-Large-XLSR-53-Kyrgyz
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kyrgyz using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ky", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Kyrgyz test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ky.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-kyrgyz")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ky/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ky/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:.2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 31.88 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
anukaver/xlm-roberta-est-qa | 31a799247a6af5dd3e476afd6c71a148d1edb280 | 2021-04-27T10:47:18.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"dataset:squad",
"dataset:anukaver/EstQA",
"transformers",
"autotrain_compatible"
]
| question-answering | false | anukaver | null | anukaver/xlm-roberta-est-qa | 10 | null | transformers | 11,596 | ---
tags:
- question-answering
datasets:
- squad
- anukaver/EstQA
---
# Question answering model for Estonian
This is a question answering model based on the XLM-Roberta base model. It is fine-tuned sequentially on:
1. English SQuAD v1.1
2. SQuAD v1.1 translated into Estonian
3. Small native Estonian dataset (800 samples)
The model retains good multilingual properties and can be used for extractive QA tasks in all languages covered by XLM-Roberta. Performance is best in the fine-tuning languages, Estonian and English.
| Tested on | F1 | EM |
| ----------- | --- | --- |
| EstQA test set | 82.4 | 75.3 |
| SQuAD v1.1 dev set | 86.9 | 77.9 |
The Estonian dataset used for fine-tuning and validating results is available at https://huggingface.co/datasets/anukaver/EstQA/ (version 1.0).
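As a minimal usage sketch (not part of the original card; the question and context below are made-up examples), the standard extractive question-answering pipeline should work:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="anukaver/xlm-roberta-est-qa")

# Hypothetical Estonian example; other XLM-Roberta languages should also work.
result = qa(
    question="Mis aastal algas laulev revolutsioon?",
    context="Laulev revolutsioon algas Eestis 1988. aastal.",
)
print(result["answer"], result["score"])
``` |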
arampacha/clip-rsicd-v5 | fd7394456f27c25b97def109edcadd8e3b92ce8b | 2021-07-17T09:59:40.000Z | [
"pytorch",
"jax",
"clip",
"feature-extraction",
"transformers"
]
| feature-extraction | false | arampacha | null | arampacha/clip-rsicd-v5 | 10 | null | transformers | 11,597 | Entry not found |
arnolfokam/bert-base-uncased-pcm | 5cfa0dd8e8d571a3940fc48d14bd539567fb7b83 | 2021-11-24T21:14:03.000Z | [
"pytorch",
"bert",
"token-classification",
"pcm",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/bert-base-uncased-pcm | 10 | null | transformers | 11,598 | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
---
# Model description
**bert-base-uncased-pcm** is a fine-tuned BERT base (uncased) model trained to recognize four types of entities:
- dates & times (DATE)
- locations (LOC)
- organizations (ORG)
- persons (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence at 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model limits its practical use outside research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-pcm**| 88.61 | 84.17 | 86.33
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` |
arnolfokam/mbert-base-uncased-ner-swa | f8e792125e11fd54585043957eb9107472ea2ce1 | 2021-11-24T11:31:30.000Z | [
"pytorch",
"bert",
"token-classification",
"swa",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/mbert-base-uncased-ner-swa | 10 | null | transformers | 11,599 | ---
language:
- swa
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
---
# Model description
**mbert-base-uncased-ner-swa** is a fine-tuned Multilingual BERT base (uncased) model, previously fine-tuned for Named Entity Recognition on 10 high-resource languages. It has been trained to recognize four types of entities:
- dates & times (DATE)
- locations (LOC)
- organizations (ORG)
- persons (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence at 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model limits its practical use outside research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-ner-swa**| 82.85 | 88.13 | 85.41
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` |